spaCy WordNet
spaCy WordNet is a simple custom component for using WordNet, MultiWordnet and WordNet domains with spaCy.
The component combines the NLTK wordnet interface with WordNet domains to allow users to:
- Get all synsets for a processed token. For example, getting all the synsets (word senses) of the word bank.
- Get and filter synsets by domain. For example, getting synonyms of the verb withdraw in the financial domain.
Getting started
The spaCy WordNet component can be easily integrated into spaCy pipelines. You just need the following:
Prerequisites
You need to download the following NLTK wordnet data:
python -m nltk.downloader wordnet
python -m nltk.downloader omw
Install
pip install spacy-wordnet
Supported languages
Almost all Open Multi Wordnet languages are supported.
Usage
Once you choose the desired language (from the list of supported ones above), you will need to manually download a spaCy model for it. Check the list of available models for each language in the spaCy 2.x or spaCy 3.x documentation.
English example
Download example model:
python -m spacy download en_core_web_sm
Run:
import spacy
from spacy_wordnet.wordnet_annotator import WordnetAnnotator
nlp = spacy.load('en_core_web_sm')
nlp.add_pipe("spacy_wordnet", after='tagger')
token = nlp('prices')[0]

# Extension attributes added by the component:
token._.wordnet.synsets()          # WordNet synsets for the token
token._.wordnet.lemmas()           # lemmas across those synsets
token._.wordnet.wordnet_domains()  # WordNet domains the token belongs to
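The accessors above return plain Python lists, so downstream code can flatten and de-duplicate them directly. A minimal, self-contained sketch of that pattern (`FakeSynset` is a stand-in for illustration only; with the real component, `token._.wordnet.synsets()` returns NLTK Synset objects exposing the same `lemma_names()` method):

```python
class FakeSynset:
    """Stand-in for an NLTK Synset, offering only lemma_names()."""

    def __init__(self, lemma_names):
        self._lemma_names = lemma_names

    def lemma_names(self):
        return self._lemma_names


def unique_lemma_names(synsets):
    """Flatten lemma names across synsets, de-duplicated, first-seen order."""
    seen = []
    for synset in synsets:
        for lemma in synset.lemma_names():
            if lemma not in seen:
                seen.append(lemma)
    return seen


synsets = [FakeSynset(["price", "terms"]), FakeSynset(["price", "cost"])]
print(unique_lemma_names(synsets))  # -> ['price', 'terms', 'cost']
```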
spaCy WordNet lets you find synonyms restricted to a domain of interest, for example economy:
economy_domains = ['finance', 'banking']
enriched_sentence = []
sentence = nlp('I want to withdraw 5,000 euros')
for token in sentence:
    synsets = token._.wordnet.wordnet_synsets_for_domain(economy_domains)
    if not synsets:
        enriched_sentence.append(token.text)
    else:
        lemmas_for_synset = [lemma for s in synsets for lemma in s.lemma_names()]
        enriched_sentence.append('({})'.format('|'.join(set(lemmas_for_synset))))

print(' '.join(enriched_sentence))
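The loop above keeps tokens without domain synsets verbatim and replaces the rest with a `(a|b|c)` synonym group. That enrichment step can be sketched in isolation, with a plain dict standing in for the `wordnet_synsets_for_domain()` lookup (the dict and its contents are illustrative, not the component's API):

```python
def enrich(tokens, domain_synonyms):
    """Replace each token that has domain synonyms with a '(a|b|c)' group."""
    out = []
    for tok in tokens:
        lemmas = domain_synonyms.get(tok)
        if not lemmas:
            out.append(tok)  # no domain senses: keep the token as-is
        else:
            # sorted() makes the output deterministic; the README example
            # uses an unordered set, so its group order may vary.
            out.append('({})'.format('|'.join(sorted(set(lemmas)))))
    return ' '.join(out)


synonyms = {'withdraw': ['withdraw', 'draw', 'take_out']}
print(enrich(['I', 'want', 'to', 'withdraw', '5,000', 'euros'], synonyms))
# -> I want to (draw|take_out|withdraw) 5,000 euros
```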
Portuguese example
Download example model:
python -m spacy download pt_core_news_sm
Run:
import spacy
from spacy_wordnet.wordnet_annotator import WordnetAnnotator
nlp = spacy.load('pt_core_news_sm')
nlp.add_pipe("spacy_wordnet", after='tagger', config={'lang': nlp.lang})
text = "Eu quero retirar 5.000 euros"
economy_domains = ['finance', 'banking']
enriched_sentence = []
sentence = nlp(text)
for token in sentence:
    synsets = token._.wordnet.wordnet_synsets_for_domain(economy_domains)
    if not synsets:
        enriched_sentence.append(token.text)
    else:
        lemmas_for_synset = [lemma for s in synsets for lemma in s.lemma_names('por')]
        enriched_sentence.append('({})'.format('|'.join(set(lemmas_for_synset))))

print(' '.join(enriched_sentence))