TextDistance

TextDistance -- a Python library for comparing the distance between two or more sequences using many algorithms.

Features:

  • 30+ algorithms
  • Pure Python implementation
  • Simple usage
  • Comparison of more than two sequences
  • Some algorithms have more than one implementation in one class
  • Optional NumPy usage for maximum speed

Algorithms

Edit based

| Algorithm | Class | Functions |
|---|---|---|
| Hamming | Hamming | hamming |
| MLIPNS | MLIPNS | mlipns |
| Levenshtein | Levenshtein | levenshtein |
| Damerau-Levenshtein | DamerauLevenshtein | damerau_levenshtein |
| Jaro-Winkler | JaroWinkler | jaro_winkler, jaro |
| Strcmp95 | StrCmp95 | strcmp95 |
| Needleman-Wunsch | NeedlemanWunsch | needleman_wunsch |
| Gotoh | Gotoh | gotoh |
| Smith-Waterman | SmithWaterman | smith_waterman |
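
Every row above maps one algorithm to a class (for custom parameters) and a ready-made function-style instance. A minimal sketch with Levenshtein; the 'kitten'/'sitting' pair is only an illustrative input, and 3 is the textbook Levenshtein distance for it:

import textdistance

# function-style instance with default parameters
textdistance.levenshtein('kitten', 'sitting')
# 3

# class form, here comparing bigram sequences instead of single characters
textdistance.Levenshtein(qval=2).distance('kitten', 'sitting')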

Token based

| Algorithm | Class | Functions |
|---|---|---|
| Jaccard index | Jaccard | jaccard |
| Sørensen–Dice coefficient | Sorensen | sorensen, sorensen_dice, dice |
| Tversky index | Tversky | tversky |
| Overlap coefficient | Overlap | overlap |
| Tanimoto distance | Tanimoto | tanimoto |
| Cosine similarity | Cosine | cosine |
| Monge-Elkan | MongeElkan | monge_elkan |
| Bag distance | Bag | bag |
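
Token-based algorithms share the same interface. A small sketch using the documented normalized_similarity method; the inputs are arbitrary and the exact values depend on the default character tokenization:

import textdistance

textdistance.jaccard.normalized_similarity('test', 'text')
# float in [0, 1]; 1 means identical

textdistance.cosine.normalized_similarity('test', 'text')
# cosine similarity over the tokens, also in [0, 1]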

Sequence based

| Algorithm | Class | Functions |
|---|---|---|
| longest common subsequence similarity | LCSSeq | lcsseq |
| longest common substring similarity | LCSStr | lcsstr |
| Ratcliff-Obershelp similarity | RatcliffObershelp | ratcliff_obershelp |
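
These also work through the common methods described in the Usage section; a sketch (values not shown, since they depend on the inputs):

import textdistance

textdistance.lcsseq.normalized_similarity('test', 'text')
# float in [0, 1]; 1 means identical

textdistance.ratcliff_obershelp.normalized_similarity('test', 'text')
# Ratcliff-Obershelp similarity, normalized to [0, 1]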

Compression based

Normalized compression distance with different compression algorithms.

Classic compression algorithms:

| Algorithm | Class | Function |
|---|---|---|
| Arithmetic coding | ArithNCD | arith_ncd |
| RLE | RLENCD | rle_ncd |
| BWT RLE | BWTRLENCD | bwtrle_ncd |

Normal compression algorithms:

| Algorithm | Class | Function |
|---|---|---|
| Square Root | SqrtNCD | sqrt_ncd |
| Entropy | EntropyNCD | entropy_ncd |

Work-in-progress algorithms that compare two strings as arrays of bits:

| Algorithm | Class | Function |
|---|---|---|
| BZ2 | BZ2NCD | bz2_ncd |
| LZMA | LZMANCD | lzma_ncd |
| ZLib | ZLIBNCD | zlib_ncd |

See blog post for more details about NCD.
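
The NCD flavours use the same call convention as everything else. A minimal sketch with two of the functions from the tables above (values not shown, since they depend on the compressor):

import textdistance

textdistance.zlib_ncd('The quick brown fox', 'The quick brown dog')
# a distance: smaller means the strings compress better together, i.e. are more similar

textdistance.entropy_ncd('The quick brown fox', 'The quick brown dog')
# entropy-based NCD from the table above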

Phonetic

| Algorithm | Class | Functions |
|---|---|---|
| MRA | MRA | mra |
| Editex | Editex | editex |

Simple

| Algorithm | Class | Functions |
|---|---|---|
| Prefix similarity | Prefix | prefix |
| Postfix similarity | Postfix | postfix |
| Length distance | Length | length |
| Identity similarity | Identity | identity |
| Matrix similarity | Matrix | matrix |
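
The simple algorithms are handy as cheap baselines and follow the same interface; a short sketch using the documented normalized methods:

import textdistance

textdistance.prefix.normalized_similarity('test', 'text')
# based on the shared prefix ('te' here), normalized to [0, 1]

textdistance.length.normalized_distance('test', 'texts')
# based on the difference in lengths, normalized to [0, 1]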

Installation

Stable

Only pure python implementation:

pip install textdistance

With extra libraries for maximum speed:

pip install "textdistance[extras]"

With all libraries (required for benchmarking and testing):

pip install "textdistance[benchmark]"

With algorithm-specific extras:

pip install "textdistance[Hamming]"

Algorithms with available extras: DamerauLevenshtein, Hamming, Jaro, JaroWinkler, Levenshtein.

Dev

Via pip:

pip install -e git+https://github.com/life4/textdistance.git#egg=textdistance

Or clone the repo and install with some extras:

git clone https://github.com/life4/textdistance.git
pip install -e ".[benchmark]"

Usage

All algorithms have 2 interfaces:

  1. Class with algorithm-specific params for customizing.
  2. Class instance with default params for quick and simple usage.

All algorithms have some common methods:

  1. .distance(*sequences) -- calculate distance between sequences.
  2. .similarity(*sequences) -- calculate similarity for sequences.
  3. .maximum(*sequences) -- maximum possible value for distance and similarity. For any sequences: distance + similarity == maximum (see the short sketch after this list).
  4. .normalized_distance(*sequences) -- normalized distance between sequences. The return value is a float between 0 and 1, where 0 means equal and 1 means totally different.
  5. .normalized_similarity(*sequences) -- normalized similarity for sequences. The return value is a float between 0 and 1, where 0 means totally different and 1 means equal.
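
A tiny sketch of that identity, using the Hamming values shown in the Examples section below:

import textdistance

h = textdistance.hamming
h.distance('test', 'text') + h.similarity('test', 'text') == h.maximum('test', 'text')
# True (1 + 3 == 4)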

Most common init arguments:

  1. qval -- q-value for splitting sequences into q-grams. Possible values:
    • 1 (default) -- compare sequences by characters.
    • 2 or more -- transform sequences into q-grams.
    • None -- split sequences by words.
  2. as_set -- for token-based algorithms (see the sketch after this list):
    • True -- t and ttt are equal.
    • False (default) -- t and ttt are different.
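
A sketch of both arguments with the token-based Jaccard class; the 't' / 'ttt' pair mirrors the description above, and the word-level pair is only an illustrative input:

import textdistance

# qval=None: split by words instead of characters
textdistance.Jaccard(qval=None).normalized_similarity('lorem ipsum dolor', 'lorem dolor sit')

# as_set=True: repetitions are ignored, so 't' and 'ttt' are equal
textdistance.Jaccard(as_set=True).normalized_similarity('t', 'ttt')
# 1.0

textdistance.Jaccard(as_set=False).normalized_similarity('t', 'ttt')
# less than 1.0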

Examples

For example, Hamming distance:

import textdistance

textdistance.hamming('test', 'text')
# 1

textdistance.hamming.distance('test', 'text')
# 1

textdistance.hamming.similarity('test', 'text')
# 3

textdistance.hamming.normalized_distance('test', 'text')
# 0.25

textdistance.hamming.normalized_similarity('test', 'text')
# 0.75

textdistance.Hamming(qval=2).distance('test', 'text')
# 2

All other algorithms share the same interface.

Articles

A few articles with examples of how to use textdistance in the real world:

Extra libraries

For the main algorithms, textdistance tries to call known external libraries (fastest first) if they are available (installed on your system) and applicable (the external implementation can compare the given type of sequences). Install textdistance with extras to enable this feature.

You can disable this by passing the external=False argument on init:

import textdistance
hamming = textdistance.Hamming(external=False)
hamming('text', 'testit')
# 3
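
To check what the external speed-up buys on a given machine, one option is a rough timing comparison. This is just a sketch using the standard library's timeit, not something textdistance ships itself:

import timeit

import textdistance

internal = textdistance.Levenshtein(external=False)
external = textdistance.Levenshtein(external=True)  # the default behaviour; uses pure Python if no external library is installed

pair = ('kitten' * 20, 'sitting' * 20)
print('pure python:', timeit.timeit(lambda: internal(*pair), number=10))
print('external   :', timeit.timeit(lambda: external(*pair), number=10))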

Supported libraries:

  1. Distance
  2. jellyfish
  3. py_stringmatching
  4. pylev
  5. Levenshtein
  6. pyxDamerauLevenshtein

Algorithms:

  1. DamerauLevenshtein
  2. Hamming
  3. Jaro
  4. JaroWinkler
  5. Levenshtein

Benchmarks

Without extras installation:

| algorithm | library | time |
|---|---|---|
| DamerauLevenshtein | rapidfuzz | 0.00312 |
| DamerauLevenshtein | jellyfish | 0.00591 |
| DamerauLevenshtein | pyxdameraulevenshtein | 0.03335 |
| DamerauLevenshtein | textdistance | 0.83524 |
| Hamming | Levenshtein | 0.00038 |
| Hamming | rapidfuzz | 0.00044 |
| Hamming | jellyfish | 0.00091 |
| Hamming | distance | 0.00812 |
| Hamming | textdistance | 0.03531 |
| Jaro | rapidfuzz | 0.00092 |
| Jaro | jellyfish | 0.00191 |
| Jaro | textdistance | 0.07365 |
| JaroWinkler | rapidfuzz | 0.00094 |
| JaroWinkler | jellyfish | 0.00195 |
| JaroWinkler | textdistance | 0.07501 |
| Levenshtein | rapidfuzz | 0.00099 |
| Levenshtein | Levenshtein | 0.00122 |
| Levenshtein | jellyfish | 0.00254 |
| Levenshtein | pylev | 0.15688 |
| Levenshtein | distance | 0.28669 |
| Levenshtein | textdistance | 0.53902 |

Total: 24 libs.

Yes, it is that slow. Use TextDistance in production only with extras installed.

TextDistance uses these benchmark results to optimize its algorithms, trying to call the fastest external library first (when possible).

You can run the benchmark manually on your system:

pip install "textdistance[benchmark]"
python3 -m textdistance.benchmark

TextDistance shows a benchmark results table for your system and saves library priorities into a libraries.json file in TextDistance's folder. This file is then used by textdistance to call the fastest available implementation of each algorithm. A default libraries.json is already included in the package.

Running tests

All you need is task. See Taskfile.yml for the list of available commands. For example, to run tests including third-party library usage, execute task pytest-external:run.

Contributing

PRs are welcome!

  • Found a bug? Fix it!
  • Want to add more algorithms? Sure! Just make it with the same interface as other algorithms in the lib and add some tests.
  • Can make something faster? Great! Just avoid external dependencies and remember that everything should work not only with strings.
  • Something else you think is good? Do it! Just make sure that CI passes and everything from the README is still applicable (interface, features, and so on).
  • Have no time to code? Tell your friends and subscribers about textdistance. More users, more contributions, more amazing features.

Thank you :heart:
