text-moderate
A comprehensive JavaScript library for content moderation, including profanity filtering, sentiment analysis, and toxicity detection. Leveraging advanced algorithms and external APIs, TextModerate provides developers with tools to create safer and more positive online environments.
TextModerate is a JavaScript library for text analysis. It integrates profanity/bad-words filtering, sentiment analysis, and toxicity detection. By leveraging bad-words lists, the AFINN-165 wordlist, the Emoji Sentiment Ranking, and the Perspective API, TextModerate offers a robust toolkit for enhancing content moderation and fostering healthier online interactions.
Profanity filtering and sentiment analysis support English and French out of the box and can be extended to other languages. Toxicity detection works for any language.
npm install text-moderate --save
Censor or identify profanity within text inputs automatically, using the badwords-list from Google's WDYL Project.
const TextModerate = require('text-moderate');
const textModerate = new TextModerate();
console.log(textModerate.isProfane("Don't be an ash0le"))
// Output: true
console.log(textModerate.clean("Don't be an ash0le"));
// Output: "Don't be an ******"
const customTextModerate = new TextModerate({ placeHolder: 'x' });
customTextModerate.clean("Don't be an ash0le"); // Don't be an xxxxxx
textModerate.addWords('some', 'bad', 'word');
textModerate.clean("some bad word!"); // **** *** ****!

// Use a fresh instance so the words added above don't affect this example
const filter = new TextModerate();
filter.removeWords('hells', 'sadist');
filter.clean("some hells word!"); // some hells word!
This function helps maintain respectful communication by recognizing profane words and replacing them with placeholders.
Evaluate textual sentiment, identifying whether the content is positive, neutral, or negative.
const result = textModerate.analyzeSentiment('Cats are amazing.');
console.dir(result);
Example output:
{
"score": 3,
"comparative": 1,
"calculation": [{"amazing": 3}],
"tokens": ["cats", "are", "amazing"],
"words": ["amazing"],
"positive": ["amazing"],
"negative": []
}
The output demonstrates a positive sentiment score, reflecting the text's overall positive tone.
Here, "comparative" Score can be seen as main metric if it's zero netural and greater 0.5 is positive and less than -0.5 is negative
const frLanguage = {
  labels: { 'stupide': -2 }
};
textModerate.registerLanguage('fr', frLanguage);

const frResult = textModerate.analyzeSentiment('Le chat est stupide.', { language: 'fr' });
console.dir(frResult); // Score: -2, Comparative: -0.5
Analyze text for toxicity with the Perspective API to maintain constructive discourse. The Perspective API is developed and maintained by Google.
const API_KEY = 'your_api_key_here'; // Replace with your Perspective API key from Google API Services
textModerate.analyzeToxicity("Your text to analyze", API_KEY)
.then(result => console.log(JSON.stringify(result)))
.catch(err => console.error(err));
The Perspective API is currently free, with a rate limit of 60 requests per minute (as of December 2023). Link: https://support.perspectiveapi.com/s/docs-get-started?language=en_US
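If you need to analyze many texts, it is worth spacing out calls to stay under that quota. Here is a minimal sketch; delay and analyzeBatch are illustrative names, not library APIs:

// Illustrative rate limiting: at most one request per second keeps you
// well under the 60-requests-per-minute quota.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function analyzeBatch(texts, apiKey) {
  const results = [];
  for (const text of texts) {
    results.push(await textModerate.analyzeToxicity(text, apiKey));
    await delay(1000);
  }
  return results;
}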
Sample output:
{
"attributeScores": {
"TOXICITY": {
"summaryScore": {
"value": 0.021196328,
"type": "PROBABILITY"
}
}
},
"languages": ["en"],
"detectedLanguages": ["en"]
}
This provides a toxicity score indicating how likely the text is to be perceived as toxic, which helps moderate content effectively. Based on the experiments in this paper, a "soft" toxicity threshold is 0.5 and a "hard" toxicity threshold is 0.7: https://aclanthology.org/2021.findings-emnlp.210.pdf
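A simple moderation gate based on those thresholds might look like the following sketch. The constants and actions are our own illustration, not part of the library:

// Thresholds taken from the paper cited above.
const SOFT_TOXICITY = 0.5;
const HARD_TOXICITY = 0.7;

textModerate.analyzeToxicity('Your text to analyze', API_KEY)
  .then(result => {
    const score = result.attributeScores.TOXICITY.summaryScore.value;
    if (score >= HARD_TOXICITY) console.log('block');        // very likely toxic
    else if (score >= SOFT_TOXICITY) console.log('review');  // borderline, flag for a human
    else console.log('allow');
  })
  .catch(err => console.error(err));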
TextModerate constructor. Combines the functionality of word filtering and sentiment analysis.
Parameters
- options (Object, optional, default {}): TextModerate instance options.
- options.emptyList (boolean, optional, default false): Instantiate the filter with no blacklist.
- options.list (array, optional, default []): Instantiate the filter with a custom list.
- options.placeHolder (string, optional, default '*'): Character used to replace profane words.
- options.regex (string, optional, default /[^a-zA-Z0-9|\$|\@]|\^/g): Regular expression used to sanitize words before comparing them to the blacklist.
- options.replaceRegex (string, optional, default /\w/g): Regular expression used to replace profane words with placeHolder.
- options.splitRegex (string, optional, default /\b/): Regular expression used to split a string into words.
- options.sentimentOptions (Object, optional, default {}): Options for sentiment analysis.
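As a minimal sketch, a few of these options can be combined at construction time. Note that 'foobar' is a made-up example word, not part of the default blacklist:

// Illustrative: combine constructor options from the list above.
const moderator = new TextModerate({
  placeHolder: '-',   // replace matched characters with dashes instead of asterisks
  list: ['foobar']    // add custom words on top of the default blacklist
});

console.log(moderator.clean('foobar is not allowed')); // "------ is not allowed"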
isProfane(string) — Determine if a string contains profane language.
Parameters
- string (string): String to evaluate for profanity.

Replace a word with placeHolder characters.
Parameters
- string (string): String to replace.

clean(string) — Evaluate a string for profanity and return an edited version.
Parameters
- string (string): Sentence to filter.
addWords(...words) — Add word(s) to the blacklist filter / remove words from the whitelist filter.
Parameters
- word (...string): Word(s) to add to the blacklist.

removeWords(...words) — Add words to the whitelist filter.
Parameters
- word (...string): Word(s) to add to the whitelist.
registerLanguage(languageCode, language) — Registers the specified language.
Parameters
- languageCode (String): Two-digit code for the language to register.
- language (Object): The language module to register.

analyzeSentiment(phrase, opts, callback) — Performs sentiment analysis on the provided input phrase.
Parameters
- phrase (String): Input phrase.
- opts (Object, optional, default {}): Options.
- callback (function): Optional callback.
Returns Object.
analyzeToxicity — Analyzes the toxicity of a given text using the Perspective API (see the usage example above for its arguments).
Returns Promise: A promise that resolves with the analysis result.
Remove special characters and return an array of tokens (words).
Parameters
- input (string): Input string.
Returns array: Array of tokens.

Registers the specified language.
Parameters
- languageCode (String): Two-digit code for the language to register.
- language (Object): The language module to register.

Retrieves a language object from the cache, or tries to load it from the set of supported languages.
Parameters
- languageCode (String): Two-digit code for the language to fetch.

Returns the AFINN-165 weighted labels for the specified language.
Parameters
- languageCode (String): Two-digit language code.
Returns Object.

Applies a scoring strategy for the current token.
Parameters
- languageCode (String): Two-digit language code.
- tokens (Array): Tokens of the phrase to analyze.
- cursor (int): Cursor of the current token being analyzed.
- tokenScore (int): The score of the current token being analyzed.

AFINN is a list of words rated for valence with an integer between minus five (negative) and plus five (positive). Sentiment analysis is performed by cross-checking the string tokens (words, emojis) against the AFINN list and getting their respective scores. The comparative score is simply the sum of each token's score divided by the number of tokens. So, for example, let's take the following:
I love cats, but I am allergic to them.
That string results in the following:
{
score: 1,
comparative: 0.1111111111111111,
calculation: [ { allergic: -2 }, { love: 3 } ],
tokens: [
'i',
'love',
'cats',
'but',
'i',
'am',
'allergic',
'to',
'them'
],
words: [
'allergic',
'love'
],
positive: [
'love'
],
negative: [
'allergic'
]
}
In this case, love has a value of 3, allergic has a value of -2, and the remaining tokens are neutral with a value of 0. Because the string has 9 tokens the resulting comparative score looks like:
(3 + -2) / 9 = 0.111111111
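The same computation, sketched in code with a toy two-word AFINN-style table (the labels object is illustrative, not the real wordlist):

// Toy AFINN-style table: only the two non-neutral tokens matter here.
const labels = { love: 3, allergic: -2 };

function scoreTokens(tokens) {
  const score = tokens.reduce((sum, t) => sum + (labels[t] || 0), 0);
  return { score, comparative: score / tokens.length };
}

const tokens = ['i', 'love', 'cats', 'but', 'i', 'am', 'allergic', 'to', 'them'];
console.log(scoreTokens(tokens)); // { score: 1, comparative: 0.1111111111111111 }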
This approach leaves you with a mid-point of 0 and the upper and lower bounds are constrained to positive and negative 5 respectively (the same as each token! 😸). For example, let's imagine an incredibly "positive" string with 200 tokens and where each token has an AFINN score of 5. Our resulting comparative score would look like this:
(max positive score * number of tokens) / number of tokens
(5 * 200) / 200 = 5
Tokenization works by splitting the input string into lines, removing special characters, and finally splitting on spaces. This produces the list of words in the string.
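A minimal sketch of that tokenization pipeline (not the library's exact implementation):

// Sketch of the tokenization described above: lowercase, strip common
// special characters, then split on whitespace.
function tokenize(input) {
  return input
    .toLowerCase()
    .replace(/[.,!?;:"()]/g, '')  // drop common special characters
    .split(/\s+/)                 // split on spaces and line breaks
    .filter(Boolean);
}

console.log(tokenize('I love cats, but I am allergic to them.'));
// ['i', 'love', 'cats', 'but', 'i', 'am', 'allergic', 'to', 'them']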
Future Improvements
The development and enhancement of the "text-moderate" library will continue to focus on making the tool more versatile and effective for developers and content managers. Planned future improvements include:
More Languages Support: Expanding the library to support additional languages for profanity filtering and sentiment analysis, making it more accessible and useful for a global audience.
More Robust Sentiment Analysis: Enhancing the sentiment analysis feature to provide deeper insight into the emotional tone of texts, possibly by incorporating machine learning techniques for greater accuracy.
Toxicity Category Attribute Along with Score: Introducing a detailed breakdown of toxicity attributes (e.g., insult, threat, obscenity) alongside the overall toxicity score to give users a more nuanced understanding of content analysis results.
By focusing on these areas, "text-moderate" aims to remain at the forefront of content moderation technology, providing developers with the tools they need to maintain positive and safe online environments.
The MIT License (MIT)
Copyright (c) 2013 Michael Price
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.