Spacy Entity Linker
Introduction
Spacy Entity Linker is a pipeline for spaCy that performs Linked Entity Extraction with Wikidata on a given Document.
The Entity Linking System operates by matching potential candidates from each sentence
(subject, object, prepositional phrase, compounds, etc.) to aliases from Wikidata. The package makes it easy to find the
category behind each entity (e.g. "banana" is of type "food", "Microsoft" is of type "company"). It is therefore useful
for information extraction and labeling tasks.
The package was written before a working entity linking solution existed inside spaCy. Compared to spaCy's entity
linking system, it has the following advantages:
- no extensive training required (entity-matching via database)
- knowledge base can be dynamically updated without retraining
- entity categories can be easily resolved
- entities can be grouped by category
It also comes with a number of disadvantages:
- it is slower than the spaCy implementation due to the use of a database for finding entities
- no context sensitivity due to the implementation of the "max-prior method" for entity disambiguation (an improved
method for this is in progress)
Installation
To install the package, run:
pip install spacy-entity-linker
Afterwards, the knowledge base (Wikidata) must be downloaded. This can either be done manually by calling
python -m spacy_entity_linker "download_knowledge_base"
or automatically the first time you access the entity linker through spaCy.
This will download and extract a ~1.3GB file that contains a preprocessed version of Wikidata.
Use
import spacy

# initialize the language model and add the entity linker to the pipeline
nlp = spacy.load("en_core_web_md")
nlp.add_pipe("entityLinker", last=True)

doc = nlp("I watched the Pirates of the Caribbean last silvester")

# all linked entities in the whole document
all_linked_entities = doc._.linkedEntities

# print the linked entities per sentence
for sent in doc.sents:
    sent._.linkedEntities.pretty_print()
EntityCollection
contains an array of entity elements. It can be accessed like an array but also implements the following helper
functions:
- pretty_print(): prints out information about all contained entities
- print_super_entities(): groups and prints all entities by their super class
doc = nlp("Elon Musk was born in South Africa. Bill Gates and Steve Jobs come from the United States")
doc._.linkedEntities.print_super_entities()
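Since the collection can be accessed like an array, individual entities can also be picked out directly. A small
sketch, assuming the indexing behaviour implied above:

# the first linked entity of the document (here presumably "Elon Musk")
first_entity = doc._.linkedEntities[0]
first_entity.pretty_print()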
EntityElement
Each linked entity is an object of type EntityElement. Each entity provides the following methods:
- get_description(): returns the description from Wikidata
- get_id(): returns the Wikidata ID
- get_label(): returns the Wikidata label
- get_span(): returns the span from the spaCy document that contains the linked entity
- get_url(): returns the URL of the corresponding Wikidata item
- pretty_print(): prints out information about the entity element
- get_sub_entities(limit=10): returns an EntityCollection of all entities that derive from the current
EntityElement (e.g. fruit -> apple, banana, etc.)
- get_super_entities(limit=10): returns an EntityCollection of all entities that the current EntityElement
derives from (e.g. New England Patriots -> football team)
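A quick sketch of the getters (the sentence is made up, and it assumes the first linked entity in the document is the
one we want):

doc = nlp("The Mona Lisa hangs in the Louvre")

# assumes the first linked entity in the document is the painting
entity = doc._.linkedEntities[0]
print(entity.get_label())        # Wikidata label of the matched item
print(entity.get_id())           # its Wikidata ID
print(entity.get_description())  # short description from Wikidata
print(entity.get_url())          # link to the Wikidata item
print(entity.get_span())         # the span in the document that was matched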
Example
In the following example, we will use SpacyEntityLinker to find the mentioned football team in our text and explore
other football teams of the same type:
doc = nlp("I follow the New England Patriots")
patriots_entity = doc._.linkedEntities[0]
patriots_entity.pretty_print()
football_team_entity = patriots_entity.get_super_entities()[0]
football_team_entity.pretty_print()
for child in football_team_entity.get_sub_entities(limit=32):
print(child)
Entity Linking Policy
Currently, the only method for choosing an entity among different possible matches (e.g. Paris the city vs. Paris the
first name) is max-prior. This method achieves around 70% accuracy in predicting the correct entities behind link
descriptions on Wikipedia.
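Conceptually, max-prior just picks the candidate with the highest context-independent popularity score. A minimal
sketch of the idea (the candidate names and scores below are made up for illustration, not taken from the actual
implementation):

def max_prior(candidates):
    # candidates: list of (entity, prior) pairs, where the prior is a
    # context-independent popularity score (e.g. derived from page views)
    return max(candidates, key=lambda candidate: candidate[1])

# "Paris" is ambiguous; the city wins because its prior is higher
candidates = [("Paris (city)", 0.92), ("Paris (first name)", 0.08)]
print(max_prior(candidates)[0])  # -> Paris (city)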
Note
The Entity Linker in its current state is still experimental and should not be used in production.
Performance
The current implementation supports only SQLite. This is advantageous for development because it does not require
any special setup or configuration. However, for more performance-critical use cases, a different database with
in-memory access (e.g. Redis) should be used. This may be implemented in the future.
Data
The knowledge base was derived from this dataset: https://www.kaggle.com/kenshoresearch/kensho-derived-wikimedia-data
It was cleaned and post-processed, including filtering out entities of "overrepresented" categories such as:
- village in China
- train stations
- stars in the Galaxy
- etc.
The purpose behind the knowledge base cleaning was to reduce the knowledge base size, while keeping the most useful entities for general purpose applications.
Currently, the only way to change the knowledge base is a bit hacky: it requires replacing or modifying the underlying
SQLite database, which you will find under site_packages/data_spacy_entity_linker/wikidb_filtered.db. The database
contains 3 tables:
- aliases
  - en_alias (English alias)
  - en_alias_lowercase (English alias, lowercased)
- joined
  - en_label (label of the Wikidata item)
  - views (number of views of the corresponding Wikipedia page in a given period of time)
  - inlinks (number of inlinks to the corresponding Wikipedia page)
  - item_id (Wikidata ID)
  - description (description of the Wikidata item)
- statements
  - source_item_id (references item_id)
  - target_item_id (references item_id)
  - edge_property_id
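As a sketch, the joined table can be inspected with plain sqlite3 (the path below is an assumption; adjust it to
wherever pip placed the package on your system):

import sqlite3

# assumed location of the shipped knowledge base; adjust to your installation
db_path = "site_packages/data_spacy_entity_linker/wikidb_filtered.db"

con = sqlite3.connect(db_path)
cur = con.cursor()

# look up items by their English label, using only the columns listed above
cur.execute(
    "SELECT item_id, en_label, description, views, inlinks "
    "FROM joined WHERE en_label = ? LIMIT 5",
    ("New England Patriots",),
)
for row in cur.fetchall():
    print(row)

con.close()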
Versions
- spacy_entity_linker>=0.0 (requires spacy>=2.2,<3.0)
- spacy_entity_linker>=1.0 (requires spacy>=3.0)
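To install a compatible pair explicitly, both packages can be pinned in one command, e.g.:

pip install "spacy>=3.0" "spacy-entity-linker>=1.0"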
TODO