
Alibi is a Python library aimed at machine learning model inspection and interpretation.
The focus of the library is to provide high-quality implementations of black-box, white-box, local and global
explanation methods for classification and regression models.
If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.
## Installation and Usage
Alibi can be installed from:

- PyPI or GitHub source (with `pip`)
- Anaconda (with `conda`/`mamba`)
### With pip
- Alibi can be installed from PyPI:

  ```bash
  pip install alibi
  ```

- Alternatively, the development version can be installed:

  ```bash
  pip install git+https://github.com/SeldonIO/alibi.git
  ```

- To take advantage of distributed computation of explanations, install alibi with ray (see the sketch after this list):

  ```bash
  pip install alibi[ray]
  ```

- For SHAP support, install alibi as follows:

  ```bash
  pip install alibi[shap]
  ```
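To illustrate what the `ray` extra enables, here is a minimal sketch of distributed SHAP computation, assuming both the `shap` and `ray` extras are installed; the scikit-learn model, the wine dataset and the `n_cpus` value are illustrative choices, not requirements:

```python
from alibi.explainers import KernelShap
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression

X, y = load_wine(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# with alibi[ray] installed, distributed_opts parallelises the sampling over ray workers
explainer = KernelShap(clf.predict_proba, distributed_opts={'n_cpus': 2})
explainer.fit(X[:50])                    # background (reference) dataset
explanation = explainer.explain(X[:5])   # explain a batch of instances
print(explanation.shap_values[0].shape)  # one attribution array per class
```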
### With conda
To install from conda-forge it is recommended to use mamba, which can be installed to the base conda environment with:

```bash
conda install mamba -n base -c conda-forge
```
- For the standard Alibi install:

  ```bash
  mamba install -c conda-forge alibi
  ```

- For distributed computing support:

  ```bash
  mamba install -c conda-forge alibi ray
  ```

- For SHAP support:

  ```bash
  mamba install -c conda-forge alibi shap
  ```
### Usage
The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, `fit` and `explain` steps. We will use the `AnchorTabular` explainer to illustrate the API:
```python
from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)
```
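For context, the snippet above can be made fully self-contained as follows; the iris data and random forest are illustrative choices, not requirements of the API:

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(n_estimators=50).fit(data.data, data.target)

# any callable mapping a batch of instances to predictions will do
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)

explanation = explainer.explain(data.data[0])
print(explanation.anchor)     # list of predicate strings, e.g. ['petal width (cm) <= 0.80']
print(explanation.precision)  # fraction of perturbed samples where the anchor holds
```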
The explanation returned is an `Explanation` object with attributes `meta` and `data`. `meta` is a dictionary containing the explainer metadata and any hyperparameters, and `data` is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via `explanation.data['anchor']` (or `explanation.anchor`). The exact fields available vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
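Concretely, continuing the anchor example above (the `meta` keys shown follow the documented defaults; `data` keys differ across methods):

```python
# equivalent dictionary and attribute access to a computed field
print(explanation.data['anchor'])
print(explanation.anchor)

# explainer metadata: name and the hyperparameters used
print(explanation.meta['name'])    # 'AnchorTabular'
print(explanation.meta['params'])  # hyperparameters recorded by the explainer
```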
## Supported Methods
The following tables summarize the possible use cases for each method.
### Model Explanations
### Model Confidence
These algorithms provide instance-specific scores measuring the model confidence for making a particular prediction; a usage sketch follows the key below.
Key:
- BB - black-box (only require a prediction function)
- BB* - black-box but assume model is differentiable
- WB - requires white-box model access. There may be limitations on models supported
- TF/Keras - TensorFlow models via the Keras API
- Local - instance specific explanation, why was this prediction made?
- Global - explains the model with respect to a set of instances
- (1) - depending on model
- (2) - may require dimensionality reduction
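As a minimal sketch of one of these methods, a trust score for a prediction can be computed as follows; the classifier and data are illustrative, and the default filtering options are assumed:

```python
from alibi.confidence import TrustScore
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

ts = TrustScore()
ts.fit(X, y, classes=3)  # builds per-class neighbour structures over the training set

y_pred = clf.predict(X[:5])
score, closest_class = ts.score(X[:5], y_pred)
print(score)  # higher = prediction better supported by the training data geometry
```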
### Prototypes

These algorithms provide a distilled view of the dataset and help construct an interpretable 1-KNN classifier.
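A sketch of the ProtoSelect interface, assuming the `kernel_distance`/`eps` API from the alibi docs; `eps` is a user-chosen coverage radius and the value below is arbitrary:

```python
from alibi.prototypes import ProtoSelect
from alibi.utils.kernel import EuclideanDistance
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# eps controls how widely each prototype "covers" nearby points
summariser = ProtoSelect(kernel_distance=EuclideanDistance(), eps=0.5)
summariser = summariser.fit(X=X)

summary = summariser.summarise(num_prototypes=5)
print(summary.data['prototypes'])  # the selected prototype instances (field name per the docs)
```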
## References and Examples
- Accumulated Local Effects (ALE, Apley and Zhu, 2016)
- Partial Dependence (J.H. Friedman, 2001)
- Partial Dependence Variance (Greenwell et al., 2018)
- Permutation Importance (Breiman, 2001; Fisher et al., 2018)
- Anchor explanations (Ribeiro et al., 2018)
- Contrastive Explanation Method (CEM, Dhurandhar et al., 2018)
- Counterfactual Explanations (extension of Wachter et al., 2017)
- Counterfactual Explanations Guided by Prototypes (Van Looveren and Klaise, 2019)
- Model-agnostic Counterfactual Explanations via RL (Samoilescu et al., 2021)
- Integrated Gradients (Sundararajan et al., 2017)
- Kernel Shapley Additive Explanations (Lundberg et al., 2017)
- Tree Shapley Additive Explanations (Lundberg et al., 2020)
- Trust Scores (Jiang et al., 2018)
- Linearity Measure
- ProtoSelect
- Similarity explanations
## Citations
If you use alibi in your research, please consider citing it.
BibTeX entry:
```bibtex
@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/21-0017.html}
}
```