
Ultra-fast, zero-dependency profanity detection engine. Ships with Turkish, English, Spanish & German — extensible to any language. Lazy compilation, deep agglutination support, ReDoS-safe regex patterns

Multi-language profanity detection and filtering engine, designed Turkish-first and extensible to any language. Not a naive blacklist — a multi-layered normalization and pattern engine that catches what simple string matching misses.
Ships with Turkish (flagship, full coverage), English, Spanish, and German built-in. Add any language with a folder and two files, or extend at runtime via extendDictionary.
A Turkish-first, extensible profanity detection and filtering engine. It catches creative profanity attempts through leet speak, character repetition, separator characters, and Turkish suffix support. Zero dependencies, TypeScript, ~14 KB gzipped.
Turkish profanity evasion is creative. Users write s2k, $1kt1r, s.i.k.t.i.r, SİKTİR, siiiiiktir, i8ne, or*spu, pu$ttt, 6öt, and expect to get away with it. Turkish is agglutinative: a single root like sik spawns dozens of forms (siktiler, sikerim, siktirler, sikimsonik). Manually listing every variant doesn't scale.
terlik.js catches all of these with a suffix engine that automatically recognizes Turkish grammatical suffixes on profane roots. Here's what a single call handles:
import { Terlik } from "terlik.js";
const terlik = new Terlik();
terlik.clean("s2mle yüzle$ g0t_v3r3n o r o s p u pezev3nk i8ne pu$ttt or*spu");
// "***** yüzle$ ********* *********** ******** **** ****** ******"
// 7 matches, 0 false positives, <2ms
npm install terlik.js
# or
pnpm add terlik.js
# or
yarn add terlik.js
import { Terlik } from "terlik.js";
// Turkish (default)
const tr = new Terlik();
tr.containsProfanity("siktir git"); // true
tr.clean("siktir git burdan"); // "****** git burdan"
// English
const en = new Terlik({ language: "en" });
en.containsProfanity("what the fuck"); // true
en.containsProfanity("siktir git"); // false (Turkish not loaded)
// Spanish & German
const es = new Terlik({ language: "es" });
const de = new Terlik({ language: "de" });
es.containsProfanity("hijo de puta"); // true
de.containsProfanity("scheiße"); // true
| Evasion technique | Example | Detected as |
|---|---|---|
| Plain text | siktir | sik |
| Turkish İ/I | SİKTİR | sik |
| Leet speak | $1kt1r, @pt@l | sik, aptal |
| Visual leet (TR) | 8ok, 6öt, i8ne, s2k | bok, göt, ibne, sik |
| Turkish number words | s2mle (s+iki+mle) | sik (sikimle) |
| Separators | s.i.k.t.i.r, s_i_k | sik |
| Spaces | o r o s p u | orospu |
| Char repetition | siiiiiktir, pu$ttt | sik, puşt |
| Mixed punctuation | or*spu, g0t_v3r3n | orospu, göt |
| Combined | $1kt1r g0t_v3r3n | both caught |
| Suffix forms | siktiler, orospuluk, gotune | sik, orospu, göt |
| Suffix + evasion | s.i.k.t.i.r.l.e.r, $1kt1rler | sik |
| Suffix chaining | siktirler (sik+tir+ler) | sik |
| Deep agglutination | siktiğimin, sikermisiniz, siktirmişcesine | sik |
| Zero-width chars | s\u200Bi\u200Bk\u200Bt\u200Bi\u200Br (ZWSP/ZWNJ/ZWJ) | sik |
| Phonetic (EN) | phuck, phucking | fuck |
| Extended leet (EN) | 8itch, s#it, ni66er | bitch, shit, nigger |
Whitelist prevents false positives on legitimate words:
terlik.containsProfanity("Amsterdam"); // false
terlik.containsProfanity("sikke"); // false (Ottoman coin)
terlik.containsProfanity("ambulans"); // false
terlik.containsProfanity("siklet"); // false (boxing weight class)
terlik.containsProfanity("memur"); // false
terlik.containsProfanity("malzeme"); // false
terlik.containsProfanity("ama"); // false (conjunction)
terlik.containsProfanity("amir"); // false
terlik.containsProfanity("dolmen"); // false
Six-stage normalization pipeline (language-aware), then pattern matching:
input
→ lowercase (locale-aware: "tr", "en", "es", "de")
→ char folding (language-specific: İ→i, ñ→n, ß→ss, ä→a, ...)
→ number expansion (optional, e.g. Turkish: s2k → sikik)
→ leet speak decode (0→o, 1→i, @→a, $→s, ...)
→ punctuation removal (between letters: s.i.k → sik)
→ repeat collapse (siiiiik → sik)
→ pattern matching (dynamic regex with language-specific char classes)
→ whitelist filtering
→ result
Each language has its own char map, leet map, char classes, and optional number expansions. The engine is language-agnostic — only the data is language-specific. This means any language can be added without modifying the core engine.
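To make the stage list above concrete, here is a minimal sketch of a data-driven normalization pass, assuming illustrative helper names and only a subset of the stages (the library's internal functions are not named like this):

// Illustrative sketch only: a subset of the pipeline, not terlik.js internals.
interface LangData {
  locale: string;
  charMap: Record<string, string>;  // e.g. { "İ": "i", "ß": "ss" }
  leetMap: Record<string, string>;  // e.g. { "0": "o", "$": "s" }
}

function normalizeSketch(input: string, lang: LangData): string {
  let s = input.toLocaleLowerCase(lang.locale);             // locale-aware lowercase
  s = s.replace(/./gu, (ch) => lang.charMap[ch] ?? ch);      // char folding
  s = s.replace(/./gu, (ch) => lang.leetMap[ch] ?? ch);      // leet decode
  s = s.replace(/(\p{L})[.\-_*]+(?=\p{L})/gu, "$1");         // separators between letters
  s = s.replace(/(\p{L})\1{2,}/gu, "$1");                    // repeat collapse
  return s;
}

normalizeSketch("S.İ.K.T.İ.R", { locale: "tr", charMap: { "İ": "i" }, leetMap: {} }); // "siktir"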
For suffixable roots, the engine appends an optional suffix group (up to 2 chained suffixes). Turkish has 83 suffixes (including question particles and adverbial forms), English has 9, Spanish has 13, German has 8.
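Conceptually, the compiled pattern for a suffixable root resembles the sketch below. This is an approximation with a tiny suffix subset, not the exact regex the library generates; real patterns also use language-specific char classes and separator tolerance.

// Approximation: root + an optional group of up to 2 chained suffixes.
const suffixes = ["tir", "ler", "im", "in"];           // small subset for illustration
const suffixGroup = `(?:${suffixes.join("|")}){0,2}`;
const rootPattern = new RegExp(`\\bsik${suffixGroup}\\b`, "iu");

rootPattern.test("siktirler"); // true  (sik + tir + ler)
rootPattern.test("sikim");     // true  (sik + im)
rootPattern.test("sikke");     // false here; the shipped dictionary also whitelists it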
Community contributions to existing language packs (new words, variants, whitelist entries) and entirely new language packs are welcome! See CONTRIBUTING.md for step-by-step instructions.
Each language lives in its own folder under src/lang/:
src/lang/
tr/
config.ts ← charMap, leetMap, charClasses, locale
dictionary.json ← entries, suffixes, whitelist
en/
config.ts
dictionary.json
...
Dictionary format (community-friendly JSON, no TypeScript needed):
{
"version": 1,
"suffixes": ["ing", "ed", "er", "s"],
"entries": [
{ "root": "fuck", "variants": ["fucking", "fucker"], "severity": "high", "category": "sexual", "suffixable": true }
],
"whitelist": ["assassin", "class", "grass"]
}
Categories: sexual, insult, slur, general. Severity: high, medium, low.
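The companion config.ts holds only the per-language normalization data. A hypothetical sketch for an imaginary xx pack, with field names following the layout above (exact types in the source may differ):

// src/lang/xx/config.ts (hypothetical example pack)
export const config = {
  locale: "xx",                               // passed to locale-aware lowercasing
  charMap: { "é": "e", "ç": "c" },            // language-specific character folding
  leetMap: { "0": "o", "1": "i", "3": "e" },  // extra leet decodings for this language
  charClasses: { i: "[i1!|]", o: "[o0]" },    // per-letter classes used when building patterns
};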
Adding a new language takes four pieces: a src/lang/xx/ folder, a dictionary.json (entries, suffixes, whitelist), a config.ts (locale, charMap, leetMap, charClasses), and one import line in src/lang/index.ts.

terlik.js ships with a deliberately narrow dictionary: the goal is to minimize false positives while catching real-world evasion patterns. The dictionary is not a massive word list; it's a curated set of roots + variants that the pattern engine expands through normalization, leet decoding, separator tolerance, and suffix chaining.
| Language | Status | Roots | Explicit Variants | Suffixes | Whitelist | Effective Forms |
|---|---|---|---|---|---|---|
| Turkish | Flagship | 39 | 115 | 83 | 67 | ~3,000+ |
| English | Full | 56 | 185 | 9 | 96 | ~2,000+ |
| Spanish | Community | 29 | 101 | 13 | 21 | ~500+ |
| German | Community | 28 | 67 | 8 | 6 | ~300+ |
"Effective forms" = roots × normalization variants × suffix combinations × evasion patterns. A root like sik with 83 possible suffixes, leet decoding, separator tolerance, and repeat collapse produces thousands of detectable surface forms.
Add your language! The engine is language-agnostic. See Adding a New Language or use extendDictionary for runtime extension.
Forms like orospucocugu, motherfucker, hijoputa, and hurensohn can be supplied via customList or addWords() at runtime if your deployment needs them. A large dictionary maximizes recall but tanks precision. In production chat systems, false positives are worse than false negatives: blocking "class" or "grass" because the dictionary is too broad erodes user trust. terlik.js defaults to high precision and lets you widen coverage per your needs:
The sık/sik paradox: Turkish sık (frequent/tight) normalizes to sik because ı→i char folding is required to catch evasions like s1kt1r. Making sik suffix-aware would flag sıkıntı (trouble), sıkma (squeeze), and sıkı (tight), all extremely common words. Instead, deep agglutination forms like siktiğimin and sikermisiniz are added as explicit variants. This is a deliberate precision-over-recall tradeoff.
// Add domain-specific words
terlik.addWords(["customSlang", "anotherWord"]);
// Or at construction time
const terlik = new Terlik({
customList: ["customSlang", "anotherWord"],
whitelist: ["legitimateWord"],
});
// Remove a built-in word if it causes false positives in your domain
terlik.removeWords(["damn"]);
terlik.js uses lazy compilation — new Terlik() is near-instant (~1.5ms). Regex patterns are compiled on the first detect() call, not at construction time. This eliminates startup cost when creating multiple instances.
| Phase | Cost | When |
|---|---|---|
| new Terlik() | ~1.5ms | Construction (lookup tables only) |
| First detect() | ~200-700ms | Lazy regex compilation + V8 JIT warmup |
| Subsequent calls | <1ms | Patterns cached, JIT optimized |
Where do you want to pay the compilation cost?
// Option A: Background warmup (recommended for servers)
// Construction is instant. Patterns compile in the next event loop tick.
// If a request arrives before warmup finishes, it compiles synchronously.
const terlik = new Terlik({ backgroundWarmup: true });
app.post("/chat", (req, res) => {
const cleaned = terlik.clean(req.body.message); // <1ms (warmup already done)
});
// Option B: Explicit warmup at startup
const terlik = new Terlik();
terlik.containsProfanity("warmup"); // Forces compilation here
app.post("/chat", (req, res) => {
const cleaned = terlik.clean(req.body.message); // <1ms
});
// Option C: Lazy (pay on first request)
const terlik = new Terlik(); // ~1.5ms
app.post("/chat", (req, res) => {
const cleaned = terlik.clean(req.body.message); // First call: ~500ms, then <1ms
});
// Option D: Multi-language warmup
const cache = Terlik.warmup(["tr", "en", "es", "de"]);
app.post("/chat", (req, res) => {
const lang = req.body.language;
const cleaned = cache.get(lang)!.clean(req.body.message); // <1ms
});
Important: Never create new Terlik() per request. A single cached instance handles requests in microseconds.
Serverless (Lambda, Vercel, Cloudflare Workers): Do NOT use backgroundWarmup. The setTimeout callback may never fire because serverless runtimes freeze the process between invocations. Use explicit warmup instead: const t = new Terlik(); t.containsProfanity("warmup"); at module scope.
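A minimal sketch of that module-scope pattern for a Lambda-style function (the handler shape is illustrative):

// Module scope runs once per cold start, so compilation stays off the request path.
import { Terlik } from "terlik.js";

const terlik = new Terlik();
terlik.containsProfanity("warmup"); // force synchronous pattern compilation here

export async function handler(event: { body: string }) {
  const { message } = JSON.parse(event.body);
  const cleaned = terlik.clean(message); // <1ms, patterns already cached
  return { statusCode: 200, body: JSON.stringify({ cleaned }) };
}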
Benchmark results (Apple Silicon, single core, msgs/sec):
| Scenario | msgs/sec |
|---|---|
| Clean messages (no matches) | ~193,000 |
| Mixed messages (balanced mode) | ~151,000 |
| Suffixed dirty messages | ~142,000 |
| Strict mode | ~390,000 |
| Loose mode (with fuzzy) | ~8,400 |
Note: Loose/fuzzy mode is ~18x slower than balanced mode due to O(n*m) similarity computation. Use it only when typo tolerance is critical, not as a default.
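The cost comes from the edit-distance computation itself. As a reference for the O(n*m) shape (not the library's internal implementation), a standard Levenshtein-based similarity looks like this:

// Classic O(n*m) dynamic-programming Levenshtein distance,
// converted to a 0..1 similarity like the fuzzyThreshold option expects.
function levenshteinSimilarity(a: string, b: string): number {
  const m = a.length, n = b.length;
  const dp = Array.from({ length: m + 1 }, (_, i) =>
    Array.from({ length: n + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return 1 - dp[m][n] / Math.max(m, n, 1);
}

levenshteinSimilarity("siktir", "siktur"); // ≈ 0.83, above a 0.8 threshold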
Head-to-head comparison on a 290-sample English corpus covering plain text, variants, leet speak, separator evasion, char repetition, combined evasion, false-positive traps, and edge cases. All libraries tested with default settings.
| Library | F1 | Precision | Recall | FPR | check() ops/sec | clean() ops/sec |
|---|---|---|---|---|---|---|
| terlik.js | 100.0% | 100.0% | 100.0% | 0.0% | 67,623 | 71,321 |
| obscenity | 81.7% | 97.4% | 70.4% | 2.3% | 71,914 | 49,978 |
| bad-words | 66.1% | 100.0% | 49.4% | 0.0% | 2,831 | 557 |
| allprofanity | 59.7% | 100.0% | 42.6% | 0.0% | 45,450 | 45,162 |
terlik.js achieves perfect detection — 100% precision, 100% recall, zero false positives — with competitive throughput (~68K check ops/sec, fastest clean() at 71K ops/sec). It catches 100% of separator and repetition evasions that other libraries miss entirely. See full methodology, per-category breakdown, and limitations.
Throughput note: The multi-pass detection pipeline (NFKD, Cyrillic confusable mapping, CamelCase decompounding) costs ~17% vs a naive single-pass approach; this is what enables 100% recall vs obscenity's 70%. Optional toggles (disableLeetDecode, disableCompound) can recover ~5-8% for controlled inputs. Safety layers (NFKD, diacritics, Cyrillic) are always active. See the full toggle guide.
Transparency: This benchmark is maintained by the terlik.js team. Dataset, adapters, and runner are open source. Reproduce with pnpm bench:compare. We document every false positive and miss; see the full report.
Measured on a labeled corpus of 388 samples across 4 languages (profane + clean + whitelist + edge cases):
| Language | Mode | Precision | Recall | F1 | FPR | FNR |
|---|---|---|---|---|---|---|
| TR | strict | 100.0% | 88.6% | 93.9% | 0.0% | 11.4% |
| TR | balanced | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| TR | loose | 99.1% | 100.0% | 99.5% | 1.6% | 0.0% |
| EN | strict | 100.0% | 95.5% | 97.7% | 0.0% | 4.5% |
| EN | balanced | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| EN | loose | 98.5% | 100.0% | 99.2% | 2.0% | 0.0% |
| ES | strict | 100.0% | 96.7% | 98.3% | 0.0% | 3.3% |
| ES | balanced | 100.0% | 96.7% | 98.3% | 0.0% | 3.3% |
| ES | loose | 100.0% | 96.7% | 98.3% | 0.0% | 3.3% |
| DE | strict | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| DE | balanced | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
| DE | loose | 100.0% | 100.0% | 100.0% | 0.0% | 0.0% |
Mode characteristics: see the per-mode table (strict / balanced / loose) in the configuration section below.
Reproduce: pnpm bench:accuracy — outputs per-category breakdown, failure list, and JSON results.
const terlik = new Terlik({
language: "tr", // built-in: "tr" | "en" | "es" | "de" (default: "tr")
mode: "balanced", // "strict" | "balanced" | "loose"
maskStyle: "stars", // "stars" | "partial" | "replace"
replaceMask: "[***]", // mask text for "replace" style
customList: ["customword"], // additional words to detect
whitelist: ["safeword"], // additional words to whitelist
enableFuzzy: false, // enable fuzzy matching
fuzzyThreshold: 0.8, // similarity threshold (0-1). 0.8 ≈ 1 typo per 5 chars
fuzzyAlgorithm: "levenshtein", // "levenshtein" | "dice"
maxLength: 10000, // truncate input beyond this
backgroundWarmup: false, // compile patterns in background via setTimeout
extendDictionary: undefined, // DictionaryData object to merge with built-in dictionary
});
| Mode | What it does | Best for |
|---|---|---|
| strict | Normalize + exact match only | Minimum false positives |
| balanced | Normalize + pattern matching with separator/leet tolerance | General use (default) |
| loose | Pattern + fuzzy matching (Levenshtein or Dice) | Maximum coverage, typo tolerance |
terlik.containsProfanity(text, options?): boolean
Quick boolean check. Runs full detection internally and returns true if any match exists.
terlik.getMatches(text, options?): MatchResult[]
Returns all matches with details:
interface MatchResult {
word: string; // matched text from original input
root: string; // dictionary root word
index: number; // position in original text
severity: "high" | "medium" | "low";
method: "exact" | "pattern" | "fuzzy";
}
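As a hedged illustration of the shape of the result (the exact fields depend on the dictionary entry that matched):

const matches = terlik.getMatches("$1kt1r git");
// e.g. [{ word: "$1kt1r", root: "sik", index: 0, severity: "high", method: "pattern" }]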
terlik.clean(text, options?): string
Returns text with profanity masked. Three styles:
terlik.clean("siktir git"); // "****** git"
terlik.clean("siktir git", { maskStyle: "partial" }); // "s****r git"
terlik.clean("siktir git", { maskStyle: "replace" }); // "[***] git"
terlik.addWords(words) / removeWords(words)
Runtime dictionary modification. Recompiles patterns automatically.
terlik.addWords(["customword"]);
terlik.containsProfanity("customword"); // true
terlik.removeWords(["salak"]);
terlik.containsProfanity("salak"); // false
Terlik.warmup(languages, options?): Map<string, Terlik>
Static method. Creates and JIT-warms instances for multiple languages at once.
const cache = Terlik.warmup(["tr", "en", "es", "de"]);
cache.get("en")!.containsProfanity("fuck"); // true — no cold start
extendDictionary Option
Merge an external dictionary with the built-in one. Useful for teams managing custom word lists without modifying the core package:
const terlik = new Terlik({
extendDictionary: {
version: 1,
suffixes: ["ci", "cu"],
entries: [
{ root: "customword", variants: ["cust0mword"], severity: "high", category: "general", suffixable: true },
],
whitelist: ["safeterm"],
},
});
terlik.containsProfanity("customword"); // true
terlik.containsProfanity("customwordci"); // true (suffix match)
terlik.containsProfanity("safeterm"); // false (whitelisted)
terlik.containsProfanity("siktir"); // true (built-in still works)
The extension dictionary must follow the same schema as built-in dictionaries. Duplicate roots are skipped; suffixes and whitelist entries are merged. Pattern cache is disabled for extended instances.
terlik.language: string
Read-only property. Returns the language code of the instance.
getSupportedLanguages(): string[]
Returns all available language codes.
import { getSupportedLanguages } from "terlik.js";
getSupportedLanguages(); // ["tr", "en", "es", "de"]
normalize(text): string
Standalone export. Uses Turkish locale by default.
import { normalize, createNormalizer } from "terlik.js";
normalize("S.İ.K.T.İ.R"); // "siktir" (Turkish default)
// Custom normalizer for any language
const deNormalize = createNormalizer({
locale: "de",
charMap: { ä: "a", ö: "o", ü: "u", ß: "ss" },
leetMap: { "0": "o", "3": "e" },
});
deNormalize("Scheiße"); // "scheisse"
972 tests covering all built-in languages, 39 Turkish root words, 56 English roots, suffix detection, lazy compilation, multi-language isolation, normalization, fuzzy matching, cleaning, integration, ReDoS hardening, attack surface coverage, external dictionary merging, and edge cases:
pnpm test # run once
pnpm test:watch # watch mode
An interactive browser-based test environment is included. Chat interface on the left, real-time process log on the right — see exactly what terlik.js does at each step (normalization, pattern matching, match details, timing).
pnpm dev:live # http://localhost:2026
See tools/README.md for details.
See Integration Guide for Express, Fastify, Next.js, Nuxt, Socket.io, and multi-language server examples.
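As a concrete starting point, a minimal Express sketch using the API shown above (an assumption-level example, not the guide's exact code):

import express from "express";
import { Terlik } from "terlik.js";

const app = express();
app.use(express.json());

// One shared instance; background warmup compiles patterns off the request path.
const terlik = new Terlik({ language: "tr", backgroundWarmup: true });

// Mask profanity in incoming chat messages before they reach route handlers.
app.use((req, _res, next) => {
  if (typeof req.body?.message === "string") {
    req.body.message = terlik.clean(req.body.message);
  }
  next();
});

app.post("/chat", (req, res) => {
  res.json({ message: req.body.message });
});

app.listen(3000);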
pnpm install # install dependencies
pnpm test # run tests
pnpm test:coverage # run tests with coverage report
pnpm typecheck # TypeScript type checking
pnpm build # build ESM + CJS output
pnpm bench # run performance benchmarks
pnpm bench:compare # run comparison benchmark vs alternatives
pnpm dev:live # start interactive test server
Pre-commit hooks (via Husky) automatically run type checking on staged .ts files.
See CONTRIBUTING.md for contribution guidelines.
See CHANGELOG.md for the full version history.
MIT