.. image:: https://img.shields.io/pypi/v/hdbscan.svg
    :target: https://pypi.python.org/pypi/hdbscan/
    :alt: PyPI Version
.. image:: https://anaconda.org/conda-forge/hdbscan/badges/version.svg
    :target: https://anaconda.org/conda-forge/hdbscan
    :alt: Conda-forge Version
.. image:: https://anaconda.org/conda-forge/hdbscan/badges/downloads.svg
    :target: https://anaconda.org/conda-forge/hdbscan
    :alt: Conda-forge downloads
.. image:: https://img.shields.io/pypi/l/hdbscan.svg
    :target: https://github.com/scikit-learn-contrib/hdbscan/blob/master/LICENSE
    :alt: License
.. image:: https://travis-ci.org/scikit-learn-contrib/hdbscan.svg
    :target: https://travis-ci.org/scikit-learn-contrib/hdbscan
    :alt: Travis Build Status
.. image:: https://codecov.io/gh/scikit-learn-contrib/hdbscan/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/scikit-learn-contrib/hdbscan
    :alt: Test Coverage
.. image:: https://readthedocs.org/projects/hdbscan/badge/?version=latest
    :target: https://hdbscan.readthedocs.org
    :alt: Docs
.. image:: http://joss.theoj.org/papers/10.21105/joss.00205/status.svg
    :target: http://joss.theoj.org/papers/10.21105/joss.00205
    :alt: JOSS article
.. image:: https://mybinder.org/badge.svg
    :target: https://mybinder.org/v2/gh/scikit-learn-contrib/hdbscan
    :alt: Launch example notebooks in Binder

=======
HDBSCAN
=======

HDBSCAN - Hierarchical Density-Based Spatial Clustering of Applications with Noise. Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection.

In practice this means that HDBSCAN returns a good clustering straight away with little or no parameter tuning -- and the primary parameter, minimum cluster size, is intuitive and easy to select.

HDBSCAN is ideal for exploratory data analysis; it's a fast and robust algorithm that you can trust to return meaningful clusters (if there are any).

Based on the papers:

McInnes L, Healy J. *Accelerated Hierarchical Density Based Clustering* 
In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, pp 33-42.
2017 `[pdf] <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8215642>`_

R. Campello, D. Moulavi, and J. Sander, *Density-Based Clustering Based on
Hierarchical Density Estimates*
In: Advances in Knowledge Discovery and Data Mining, Springer, pp 160-172.
2013

Documentation, including tutorials, is available on ReadTheDocs at http://hdbscan.readthedocs.io/en/latest/.

Notebooks `comparing HDBSCAN to other clustering algorithms <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Comparing%20Clustering%20Algorithms.ipynb>`_, `explaining how HDBSCAN works <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/How%20HDBSCAN%20Works.ipynb>`_ and `comparing performance with other python clustering implementations <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Benchmarking%20scalability%20of%20clustering%20implementations-v0.7.ipynb>`_ are available.


How to use HDBSCAN
------------------

The hdbscan package inherits from sklearn classes and thus drops in neatly next to other sklearn clusterers, with an identical calling API. Similarly, it supports input in a variety of formats: an array (or pandas dataframe, or sparse matrix) of shape ``(num_samples, num_features)``, or an array (or sparse matrix) giving a distance matrix between samples.

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)

clusterer = hdbscan.HDBSCAN(min_cluster_size=10)
cluster_labels = clusterer.fit_predict(data)
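
For the distance matrix input mentioned above, a minimal sketch (assuming ``metric='precomputed'`` is used to pass a square pairwise distance matrix; the distance computation here is only for illustration):

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

data, _ = make_blobs(1000)

# Assumption: metric='precomputed' treats the input as pairwise distances
# rather than a feature array.
distance_matrix = pairwise_distances(data)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10, metric='precomputed')
cluster_labels = clusterer.fit_predict(distance_matrix)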

Performance
-----------

Significant effort has been put into making the hdbscan implementation as fast as possible. It is orders of magnitude faster than the `reference implementation <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Python%20vs%20Java.ipynb>`_ in Java, and is currently faster than highly optimized single linkage implementations in C and C++. Version 0.7 performance can be seen in `this notebook <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Benchmarking%20scalability%20of%20clustering%20implementations-v0.7.ipynb>`_. In particular, performance on low dimensional data is better than `sklearn's DBSCAN <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/Benchmarking%20scalability%20of%20clustering%20implementations%202D%20v0.7.ipynb>`_, and via support for caching with joblib, re-clustering with different parameters can be almost free.
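
A sketch of the joblib caching mentioned above, assuming the ``memory`` parameter accepts a cache directory (the path here is a hypothetical example) in the usual sklearn/joblib style, so that refitting with a different ``min_cluster_size`` can reuse cached intermediate results:

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)

# Assumption: a directory passed to ``memory`` is used by joblib to cache
# the expensive intermediate computations between fits.
labels_small = hdbscan.HDBSCAN(min_cluster_size=10, memory='./hdbscan_cache').fit_predict(data)
labels_large = hdbscan.HDBSCAN(min_cluster_size=25, memory='./hdbscan_cache').fit_predict(data)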


Additional functionality
------------------------

The hdbscan package comes equipped with visualization tools to help you understand your clustering results. After fitting the data, the clusterer object has attributes for:

  • The condensed cluster hierarchy
  • The robust single linkage cluster hierarchy
  • The reachability distance minimal spanning tree

All of these come equipped with methods for plotting and converting to Pandas or NetworkX for further analysis. See the notebook on `how HDBSCAN works <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/How%20HDBSCAN%20Works.ipynb>`_ for examples and further details.
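
As a rough sketch of working with these attributes (assuming the attribute names ``condensed_tree_``, ``single_linkage_tree_`` and ``minimum_spanning_tree_``, that the spanning tree is only built when ``gen_min_span_tree=True``, and that matplotlib is installed for plotting):

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10, gen_min_span_tree=True).fit(data)

# Plot the hierarchies (requires matplotlib)
clusterer.condensed_tree_.plot()          # condensed cluster hierarchy
clusterer.single_linkage_tree_.plot()     # robust single linkage hierarchy
clusterer.minimum_spanning_tree_.plot()   # mutual reachability spanning tree

# Export the condensed tree for further analysis
tree_df = clusterer.condensed_tree_.to_pandas()
tree_graph = clusterer.condensed_tree_.to_networkx()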

The clusterer objects also have an attribute providing cluster membership strengths, resulting in optional soft clustering (and no further compute expense). Finally, each cluster also receives a persistence score giving the stability of the cluster over the range of distance scales present in the data; this provides a measure of the relative strength of clusters.
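
A minimal sketch of reading these values off a fitted clusterer, assuming the attribute names ``probabilities_`` (one membership strength per point) and ``cluster_persistence_`` (one stability score per cluster):

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10).fit(data)

# Assumed attributes: per-point membership strengths and per-cluster persistence.
membership_strengths = clusterer.probabilities_
persistence_scores = clusterer.cluster_persistence_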


Outlier Detection
-----------------

The HDBSCAN clusterer objects also support the GLOSH outlier detection algorithm. After fitting the clusterer to data, the outlier scores can be accessed via the ``outlier_scores_`` attribute. The result is a vector of score values, one for each data point that was fit. Higher scores represent more outlier-like objects. Selecting outliers via upper quantiles is often a good approach.
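
For example, one way to flag the most outlier-like points by an upper quantile of the scores (a sketch; the 90th percentile cutoff is an arbitrary choice):

.. code:: python

import hdbscan
import numpy as np
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10).fit(data)

# Flag points whose GLOSH score falls in the top 10%.
threshold = np.quantile(clusterer.outlier_scores_, 0.90)
outliers = np.where(clusterer.outlier_scores_ > threshold)[0]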

Based on the paper: R.J.G.B. Campello, D. Moulavi, A. Zimek and J. Sander, *Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection*, ACM Trans. on Knowledge Discovery from Data, Vol 10, 1 (July 2015), 1-51.


Robust single linkage
---------------------

The hdbscan package also provides support for the robust single linkage clustering algorithm of Chaudhuri and Dasgupta. As with the HDBSCAN implementation, this is a high performance version of the algorithm, outperforming scipy's standard single linkage implementation. The robust single linkage hierarchy is available as an attribute of the robust single linkage clusterer, again with the ability to plot or export the hierarchy, and to extract flat clusterings at a given cut level and gamma value.

Example usage:

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)

clusterer = hdbscan.RobustSingleLinkage(cut=0.125, k=7)
cluster_labels = clusterer.fit_predict(data)
hierarchy = clusterer.cluster_hierarchy_
alt_labels = hierarchy.get_clusters(0.100, 5)
hierarchy.plot()

Based on the paper: K. Chaudhuri and S. Dasgupta. "Rates of convergence for the cluster tree." In Advances in Neural Information Processing Systems, 2010.


Branch detection
----------------

The hdbscan package supports a branch-detection post-processing step by `Bot et al. <https://arxiv.org/abs/2311.15887>`_. Cluster shapes, such as branching structures, can reveal interesting patterns that are not expressed in density-based cluster hierarchies. The BranchDetector class mimics the HDBSCAN API and can be used to detect branching hierarchies in clusters. It provides condensed branch hierarchies, branch persistences, and branch memberships, and supports joblib's caching functionality. A `notebook demonstrating the BranchDetector <http://nbviewer.jupyter.org/github/scikit-learn-contrib/hdbscan/blob/master/notebooks/How%20to%20detect%20branches.ipynb>`_ is available.

Example usage:

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)

clusterer = hdbscan.HDBSCAN(branch_detection_data=True).fit(data)
branch_detector = hdbscan.BranchDetector().fit(clusterer)
branch_detector.cluster_approximation_graph_.plot(edge_width=0.1)
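
The branch memberships mentioned above can then be read off the fitted detector; a sketch assuming the BranchDetector mirrors HDBSCAN's attribute names (``labels_`` and ``probabilities_``):

.. code:: python

import hdbscan
from sklearn.datasets import make_blobs

data, _ = make_blobs(1000)
clusterer = hdbscan.HDBSCAN(branch_detection_data=True).fit(data)
branch_detector = hdbscan.BranchDetector().fit(clusterer)

# Assumption: the fitted detector exposes one combined cluster/branch label
# and one membership strength per point, mirroring the HDBSCAN API.
combined_labels = branch_detector.labels_
membership_strengths = branch_detector.probabilities_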

Based on the paper: D. M. Bot, J. Peeters, J. Liesenborgs and J. Aerts, *FLASC: A Flare-Sensitive Clustering Algorithm: Extending HDBSCAN\* for Detecting Branches in Clusters*, arXiv:2311.15887, 2023.


Installing
----------

Easiest install, if you have Anaconda (thanks to conda-forge which is awesome!):

.. code:: bash

conda install -c conda-forge hdbscan

PyPI install, presuming you have an up-to-date pip:

.. code:: bash

pip install hdbscan

Binary wheels for a number of platforms are available thanks to the work of Ryan Helinski <rlhelinski@gmail.com>.

If pip is having difficulties pulling the dependencies, we'd suggest upgrading pip to at least version 10 and trying again:

.. code:: bash

pip install --upgrade pip
pip install hdbscan

Otherwise, install the dependencies manually using anaconda, then pull hdbscan from pip:

.. code:: bash

conda install cython
conda install numpy scipy
conda install scikit-learn
pip install hdbscan

For a manual install of the latest code directly from GitHub:

.. code:: bash

pip install --upgrade git+https://github.com/scikit-learn-contrib/hdbscan.git#egg=hdbscan

Alternatively download the package, install requirements, and manually run the installer:

.. code:: bash

wget https://github.com/scikit-learn-contrib/hdbscan/archive/master.zip
unzip master.zip
rm master.zip
cd hdbscan-master

pip install -r requirements.txt

python setup.py install

Running the Tests
-----------------

The package tests can be run after installation using the command:

.. code:: bash

nosetests -s hdbscan

or, if nose is installed but nosetests is not in your PATH variable:

.. code:: bash

python -m nose -s hdbscan

If one or more of the tests fail, please report a bug at https://github.com/scikit-learn-contrib/hdbscan/issues/new


Python Version
--------------

The hdbscan library supports both Python 2 and Python 3. However, we recommend Python 3 as the better option if it is available to you.


Help and Support
----------------

For simple issues you can consult the `FAQ <https://hdbscan.readthedocs.io/en/latest/faq.html>`_ in the documentation. If your issue is not suitably resolved there, please check the `issues <https://github.com/scikit-learn-contrib/hdbscan/issues>`_ on GitHub. Finally, if no solution is available there, feel free to `open an issue <https://github.com/scikit-learn-contrib/hdbscan/issues/new>`_; the authors will attempt to respond in a reasonably timely fashion.


Contributing
------------

We welcome contributions in any form! Assistance with documentation, particularly expanding tutorials, is always welcome. To contribute, please `fork the project <https://github.com/scikit-learn-contrib/hdbscan/issues#fork-destination-box>`_, make your changes, and submit a pull request. We will do our best to work through any issues with you and get your code merged into the main branch.


Citing
------

If you have used this codebase in a scientific publication and wish to cite it, please use the `Journal of Open Source Software article <http://joss.theoj.org/papers/10.21105/joss.00205>`_.

L. McInnes, J. Healy, S. Astels, *hdbscan: Hierarchical density based clustering*
In: Journal of Open Source Software, The Open Journal, volume 2, number 11.
2017

.. code:: bibtex

@article{mcinnes2017hdbscan,
  title={hdbscan: Hierarchical density based clustering},
  author={McInnes, Leland and Healy, John and Astels, Steve},
  journal={The Journal of Open Source Software},
  volume={2},
  number={11},
  pages={205},
  year={2017}
}

To reference the high performance algorithm developed in this library, please cite our paper in the ICDMW 2017 proceedings.

McInnes L, Healy J. *Accelerated Hierarchical Density Based Clustering* 
In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, pp 33-42.
2017

.. code:: bibtex

@inproceedings{mcinnes2017accelerated,
  title={Accelerated Hierarchical Density Based Clustering},
  author={McInnes, Leland and Healy, John},
  booktitle={Data Mining Workshops (ICDMW), 2017 IEEE International Conference on},
  pages={33--42},
  year={2017},
  organization={IEEE}
}

If you used the branch-detection functionality in this codebase in a scientific publication and wish to cite it, please use the `arXiv preprint <https://arxiv.org/abs/2311.15887>`_:

D. M. Bot, J. Peeters, J. Liesenborgs and J. Aerts
*"FLASC: A Flare-Sensitive Clustering Algorithm: Extending HDBSCAN\* for Detecting Branches in Clusters"*
arXiv:2311.15887, 2023.

.. code:: bibtex

@misc{bot2023flasc,
    title={FLASC: A Flare-Sensitive Clustering Algorithm: Extending HDBSCAN* for Detecting Branches in Clusters}, 
    author={D. M. Bot and J. Peeters and J. Liesenborgs and J. Aerts},
    year={2023},
    eprint={2311.15887},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2311.15887}, 
}

Licensing
---------

The hdbscan package is 3-clause BSD licensed. Enjoy.
