compromise
modest natural language processing
npm install compromise
import nlp from 'compromise'
let doc = nlp('she sells seashells by the seashore.')
doc.verbs().toPastTense()
doc.text()
// 'she sold seashells by the seashore.'
if (doc.has('simon says #Verb')) {
return true
}
let doc = nlp(entireNovel)
doc.match('the #Adjective of times').text()
// "the blurst of times?"
and get data:
import plg from 'compromise-speech'
nlp.extend(plg)
let doc = nlp('Milwaukee has certainly had its share of visitors..')
doc.compute('syllables')
doc.places().json()
/*
[{
"text": "Milwaukee",
"terms": [{
"normal": "milwaukee",
"syllables": ["mil", "wau", "kee"]
}]
}]
*/
quickly flip between parsed and unparsed forms:
let doc = nlp('soft and yielding like a nerf ball')
doc.out({
'#Adjective': (m) => `<i>${m.text()}</i>`
})
// '<i>soft</i> and <i>yielding</i> like a nerf ball'
avoid the problems of brittle parsers:
let doc = nlp("we're not gonna take it..")
doc.has('gonna') // true
doc.has('going to') // true (implicit)
// transform
doc.contractions().expand()
doc.text()
// 'we are not going to take it..'
and whip stuff around like it's data:
let doc = nlp('ninety five thousand and fifty two')
doc.numbers().add(20)
doc.text()
// 'ninety five thousand and seventy two'
-because it actually is-
let doc = nlp('the purple dinosaur')
doc.nouns().toPlural()
doc.text()
// 'the purple dinosaurs'
Use it on the client-side:
<script src="https://unpkg.com/compromise"></script>
<script>
var doc = nlp('two bottles of beer')
doc.numbers().minus(1)
document.body.innerHTML = doc.text()
// 'one bottle of beer'
</script>
or likewise:
import nlp from 'compromise'
var doc = nlp('London is calling')
doc.verbs().toNegative()
// 'London is not calling'
compromise is ~200kb (minified).
It's pretty fast - it can run on keypress.
It works mainly by conjugating all forms of a basic word list.
The final lexicon is ~14,000 words.
You can read more about how it works here - it's weird.
okay -
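The word-list idea can be sketched in plain JavaScript. This is a toy illustration of the build step, not compromise's actual code:

```javascript
// Toy version of the idea: start from a small base word list and generate
// conjugations up-front, so the runtime lexicon can recognize any form.
// (Naive suffix rules only - the real build handles irregulars, etc.)
const baseVerbs = ['walk', 'jump']

const conjugate = (verb) => ({
  Infinitive: verb,
  PresentTense: verb + 's',
  PastTense: verb + 'ed',
  Gerund: verb + 'ing',
})

const lexicon = {}
for (const verb of baseVerbs) {
  for (const [form, word] of Object.entries(conjugate(verb))) {
    lexicon[word] = { root: verb, tag: form }
  }
}

console.log(lexicon.walked) // { root: 'walk', tag: 'PastTense' }
```

Two base verbs become eight lexicon entries - which is how a basic word list fans out into a ~14,000-word lexicon.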
compromise/one
A tokenizer of words, sentences, and punctuation.
import nlp from 'compromise/one'
let doc = nlp("Wayne's World, party time")
let data = doc.json()
/* [{
normal:"wayne's world party time",
terms:[{ text: "Wayne's", normal: "wayne" },
...
]
}]
*/
compromise/one splits your text up and wraps it in a handy API - and does nothing else.
/one is quick - most sentences take a 10th of a millisecond.
It can do ~1mb of text a second - or 10 wikipedia pages.
Infinite Jest takes 3s.
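As a rough mental model - a sketch, not the real tokenizer - splitting text while preserving punctuation looks something like this:

```javascript
// Minimal sketch of a reversible tokenizer: split into sentences, then
// terms, keeping leading/trailing punctuation so the original text can be
// reassembled. compromise/one is far more careful (abbreviations, urls..).
function tokenize(text) {
  return text.split(/(?<=[.?!])\s+/).map((sentence) => ({
    terms: sentence.split(/\s+/).map((raw) => {
      const m = raw.match(/^(\W*)(.*?)(\W*)$/)
      return { pre: m[1], text: m[2], normal: m[2].toLowerCase(), post: m[3] }
    }),
  }))
}

const doc = tokenize("Wayne's World, party time!")
console.log(doc[0].terms[1]) // { pre: '', text: 'World', normal: 'world', post: ',' }
```

Keeping `pre` and `post` around is what lets the library transform terms and still print the text back out with its original punctuation.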
compromise/two
A part-of-speech tagger, and grammar-interpreter.
import nlp from 'compromise/two'
let doc = nlp("Wayne's World, party time")
let str = doc.match('#Possessive #Noun').text()
// "Wayne's World"
compromise/two automatically calculates the very basic grammar of each word.
This is more useful than people sometimes realize.
Light grammar helps you write cleaner templates, and get closer to the information.
compromise has 83 tags, arranged in a handsome graph.
#FirstName → #Person → #ProperNoun → #Noun
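The inheritance can be pictured as a walk up an isA chain - a hypothetical sketch of the lookup, not compromise's internals:

```javascript
// A word tagged #FirstName also counts as #Person, #ProperNoun, and #Noun,
// because tag checks can walk up the graph.
const isA = {
  FirstName: 'Person',
  Person: 'ProperNoun',
  ProperNoun: 'Noun',
}

function hasTag(tag, wanted) {
  while (tag !== undefined) {
    if (tag === wanted) return true
    tag = isA[tag]
  }
  return false
}

console.log(hasTag('FirstName', 'Noun')) // true
console.log(hasTag('FirstName', 'Verb')) // false
```

This is why a match like `#Noun` also catches names - specific tags imply their more general parents, but not the other way around.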
You can see the grammar of each word by running doc.debug(), and the reasoning for each tag with nlp.verbose('tagger').
If you prefer Penn tags, you can derive them with:
let doc = nlp('welcome thrillho')
doc.compute('penn')
doc.json()
compromise/three
Phrase and sentence tooling.
import nlp from 'compromise/three'
let doc = nlp("Wayne's World, party time")
let str = doc.people().normalize().text()
// "wayne"
compromise/three is a set of tooling to zoom into and operate on parts of a text.
.numbers() grabs all the numbers in a document, for example - and extends it with new methods, like .subtract().
When you have a phrase, or group of words, you can see additional metadata about it with .json():
let doc = nlp("four out of five dentists")
console.log(doc.fractions().json())
/*[{
text: 'four out of five',
terms: [ [Object], [Object], [Object], [Object] ],
fraction: { numerator: 4, denominator: 5, decimal: 0.8 }
}
]*/
let doc = nlp("$4.09CAD")
doc.money().json()
/*[{
text: '$4.09CAD',
terms: [ [Object] ],
number: { prefix: '$', num: 4.09, suffix: 'cad'}
}
]*/
.found [getter] - is this document empty?
.docs [getter] get term objects as json
.length [getter] - count the # of characters in the document (string length)
.isView [getter] - identify a compromise object
.compute() - run a named analysis on the document
.clone() - deep-copy the document, so that no references remain
.termList() - return a flat list of all Term objects in match
.cache({}) - freeze the current state of the document, for speed-purposes
.uncache() - un-freezes the current state of the document, so it may be transformed
(match methods use the match-syntax.)
.match('') - return a new Doc, with this one as a parent
.not('') - return all results except for this
.matchOne('') - return only the first match
.if('') - return each current phrase, only if it contains this match ('only')
.ifNo('') - Filter-out any current phrases that have this match ('notIf')
.has('') - Return a boolean if this match exists
.before('') - return all terms before a match, in each phrase
.after('') - return all terms after a match, in each phrase
.union() - return combined matches without duplicates
.intersection() - return only duplicate matches
.complement() - get everything not in another match
.settle() - remove overlaps from matches
.growRight('') - add any matching terms immediately after each match
.growLeft('') - add any matching terms immediately before each match
.grow('') - add any matching terms before or after each match
.splitOn('') - return a Document with three parts for every match ('splitOn')
.splitBefore('') - partition a phrase before each matching segment
.splitAfter('') - partition a phrase after each matching segment
.lookup([]) - quick find for an array of string matches
.autofill() - create type-ahead assumptions on the document
(these methods are on the main nlp object)
.nouns().toPlural() - 'football captain' → 'football captains'
.nouns().toSingular() - 'turnovers' → 'turnover'
.verbs().json() - overloaded output with verb metadata
.verbs().parse() - get tokenized verb-phrase
.verbs().subjects() - what is doing the verb action
.verbs().adverbs() - return the adverbs describing this verb.
.verbs().isSingular() - return singular verbs like 'spencer walks'
.verbs().isPlural() - return plural verbs like 'we walk'
.verbs().isImperative() - only instruction verbs like 'eat it!'
.verbs().toPastTense() - 'will go' → 'went'
.verbs().toPresentTense() - 'walked' → 'walks'
.verbs().toFutureTense() - 'walked' → 'will walk'
.verbs().toInfinitive() - 'walks' → 'walk'
.verbs().toGerund() - 'walks' → 'walking'
.verbs().conjugate() - return all forms of these verbs
.verbs().isNegative() - return verbs with 'not', 'never' or 'no'
.verbs().isPositive() - only verbs without 'not', 'never' or 'no'
.verbs().toNegative() - 'went' → 'did not go'
.verbs().toPositive() - "didn't study" → 'studied'
.numbers().toNumber() - 'five' → '5'
.numbers().toText() - '5' → 'five'
.numbers().toOrdinal() - 'five' → 'fifth' or '5th'
.numbers().toCardinal() - 'fifth' → 'five' or '5'
.money() - things like '$2.50'
.clauses() - split-up sentences into multi-term phrases
.chunks() - split-up sentences into noun-phrases and verb-phrases
.hyphenated() - all terms connected with a hyphen or dash like 'wash-out'
.phoneNumbers() - things like '(939) 555-0113'
.hashTags() - things like '#nlp'
.emails() - things like 'hi@compromise.cool'
.emoticons() - things like :)
.emojis() - things like 💋
.atMentions() - things like '@nlp_compromise'
.urls() - things like 'compromise.cool'
.pronouns() - things like 'he'
.conjunctions() - things like 'but'
.prepositions() - things like 'of'
.abbreviations() - things like 'Mrs.'
.people() - names like 'John F. Kennedy'
.people().json() - get person-name metadata
.people().parse() - get person-name interpretation
.places() - like 'Paris, France'
.organizations() - like 'Google, Inc'
.topics() - people() + places() + organizations()
.adjectives() - things like 'quick'
.adjectives().json() - get adjective metadata
.adverbs() - things like 'quickly'
.adverbs().json() - get adverb metadata
.acronyms() - things like 'FBI'
.acronyms().strip() - remove periods from acronyms
.acronyms().addPeriods() - add periods to acronyms
.parentheses() - return anything inside (parentheses)
.parentheses().strip() - remove brackets
.possessives() - things like "Spencer's"
.possessives().strip() - "Spencer's" -> "Spencer"
.quotations() - return any terms inside paired quotation marks
.quotations().strip() - remove quotation marks
This library comes with a considerate, common-sense baseline for English grammar.
You're free to change, or lay-waste to any settings - which is the fun part, actually.
The easiest thing to do is just to suggest tags for any given words:
let myWords = {
kermit: 'FirstName',
fozzie: 'FirstName',
}
let doc = nlp(muppetText, myWords)
or make heavier changes with a compromise-plugin.
import nlp from 'compromise'
nlp.extend({
// add new tags
tags: {
Character: {
isA: 'Person',
notA: 'Adjective',
},
},
// add or change words in the lexicon
words: {
kermit: 'Character',
gonzo: 'Character',
},
// add new methods to compromise
api: (View) => {
View.prototype.kermitVoice = function () {
this.sentences().prepend('well,')
this.match('i [(am|was)]').prepend('um,')
return this
}
}
})
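Conceptually, .extend() folds each part of the plugin into the library's model. The sketch below shows the general shape - hypothetical internals, not compromise's real code:

```javascript
// tags and words get merged into the model; api() is handed the View class
// so it can attach new prototype methods.
const model = { tags: {}, words: {}, View: class { } }

function extend(model, plugin) {
  Object.assign(model.tags, plugin.tags ?? {})
  Object.assign(model.words, plugin.words ?? {})
  if (plugin.api) plugin.api(model.View)
  return model
}

extend(model, {
  tags: { Character: { isA: 'Person' } },
  words: { kermit: 'Character' },
  api: (View) => {
    View.prototype.kermitVoice = function () { return 'well, ' + this.text }
  },
})

console.log(model.words.kermit) // 'Character'
```

Because plugins only merge into shared structures, several of them can be stacked on the same nlp instance without stepping on each other.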
These are some helpful extensions:
npm install compromise-dates
.dates() - things like 'June 8th' or '03/03/18'
.durations() - things like '2 weeks' or '5mins'
.times() - things like '4:30pm' or 'half past five'
npm install compromise-stats
.tfidf({}) - rank words by frequency and uniqueness
.ngrams({}) - list all repeating sub-phrases, by word-count
.unigrams() - n-grams with one word
.bigrams() - n-grams with two words
.trigrams() - n-grams with three words
.startgrams() - n-grams including the first term of a phrase
.endgrams() - n-grams including the last term of a phrase
.edgegrams() - n-grams including the first or last term of a phrase
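What an n-gram count actually computes can be shown in a few lines - a standalone sketch (bigrams only), not the plugin's implementation:

```javascript
// Count each adjacent word-pair; the plugin does the same idea for
// arbitrary n, with ranking on top.
function bigrams(text) {
  const words = text.toLowerCase().split(/\s+/)
  const counts = {}
  for (let i = 0; i < words.length - 1; i++) {
    const gram = words[i] + ' ' + words[i + 1]
    counts[gram] = (counts[gram] ?? 0) + 1
  }
  return counts
}

console.log(bigrams('the end of the end'))
// { 'the end': 2, 'end of': 1, 'of the': 1 }
```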
npm install compromise-syllables
npm install compromise-wikipedia
We're committed to typescript/deno support, both in main and in the official plugins:
import nlp from 'compromise'
import stats from 'compromise-stats'
const nlpEx = nlp.extend(stats)
nlpEx('This is type safe!').ngrams({ min: 1 })
slash-support:
We currently split slashes up into separate words, like we do for hyphens - so things like this don't work:
nlp('the koala eats/shoots/leaves').has('koala leaves') //false
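One possible workaround - an assumption on our part, not an official API - is to normalize slashes to spaces before handing text to the parser:

```javascript
// Replace word/word slashes with spaces so each side becomes its own term.
const pretokenize = (text) => text.replace(/(\w)\/(\w)/g, '$1 $2')

console.log(pretokenize('the koala eats/shoots/leaves'))
// 'the koala eats shoots leaves'
```

This loses the slash itself, of course, so it only suits cases where the slash is acting as a word separator.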
inter-sentence match:
By default, sentences are the top-level abstraction.
Inter-sentence, or multi-sentence matches aren't supported without a plugin:
nlp("that's it. Back to Winnipeg!").has('it back')//false
nested match syntax:
The beauty and danger of regex is that you can recurse indefinitely.
Our match syntax is much weaker. Things like this are not (yet) possible:
doc.match('(modern (major|minor))? general')
Complex matches must be achieved with successive .match() statements.
dependency parsing: Proper sentence transformation requires understanding the syntax tree of a sentence, which we don't currently do. We should! Help wanted with this.
en-pos - very clever javascript pos-tagger by Alex Corvi
naturalNode - fancier statistical nlp in javascript
dariusk/pos-js - fastTag fork in javascript
compendium-js - POS and sentiment analysis in javascript
nodeBox linguistics - conjugation, inflection in javascript
reText - very impressive text utilities in javascript
superScript - conversation engine in js
jsPos - javascript build of the time-tested Brill-tagger
spaCy - speedy, multilingual tagger in C/python
Prose - quick tagger in Go by Joseph Kato
MIT