
Python package for concise, transparent, and accurate predictive modeling.
All sklearn-compatible and easy to use.
For interpretability in NLP, check out our new package: imodelsX
docs • demo notebooks
Modern machine-learning models are increasingly complex, often making them difficult to interpret. This package provides a simple interface for fitting and using state-of-the-art interpretable models, all compatible with scikit-learn. These models can often replace black-box models (e.g. random forests) with simpler models (e.g. rule lists) while improving interpretability and computational efficiency, all without sacrificing predictive accuracy! Simply import a classifier or regressor and use the `fit` and `predict` methods, same as standard scikit-learn models.
```python
from sklearn.model_selection import train_test_split
from imodels import get_clean_dataset, HSTreeClassifierCV  # import any imodels model here

# prepare data (a sample clinical dataset)
X, y, feature_names = get_clean_dataset('csi_pecarn_pred')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# initialize a tree model restricted to 4 leaf nodes, then fit it
model = HSTreeClassifierCV(max_leaf_nodes=4)
model.fit(X_train, y_train, feature_names=feature_names)

preds = model.predict(X_test)              # discrete predictions: shape is (n_test,)
preds_proba = model.predict_proba(X_test)  # predicted probabilities: shape is (n_test, n_classes)
print(model)                               # print the fitted model
```
```
------------------------------
Decision Tree with Hierarchical Shrinkage
Prediction is made by looking at the value in the appropriate leaf of the tree
------------------------------
|--- FocalNeuroFindings2 <= 0.50
|   |--- HighriskDiving <= 0.50
|   |   |--- Torticollis2 <= 0.50
|   |   |   |--- value: [0.10]
|   |   |--- Torticollis2 > 0.50
|   |   |   |--- value: [0.30]
|   |--- HighriskDiving > 0.50
|   |   |--- value: [0.68]
|--- FocalNeuroFindings2 > 0.50
|   |--- value: [0.42]
```
Install with `pip install imodels` (see here for help).
🗂️ Docs, 📄 Research paper, 🔗 Reference code implementation
| Model | Reference | Description |
| --- | --- | --- |
| RuleFit rule set | 🗂️, 📄, 🔗 | Fits a sparse linear model on rules extracted from decision trees |
| Skope rule set | 🗂️, 📄 | Extracts rules from gradient-boosted trees, deduplicates them, then linearly combines them based on their out-of-bag (OOB) precision |
| Boosted rule set | 🗂️, 📄, 🔗 | Sequentially fits a set of rules with AdaBoost |
| SLIPPER rule set | 🗂️, 📄 | Sequentially learns a set of rules with SLIPPER |
| Bayesian rule set | 🗂️, 📄, 🔗 | Finds a concise rule set with Bayesian sampling (slow) |
| Optimal rule list | 🗂️, 📄, 🔗 | Fits a rule list using global optimization for sparsity (CORELS) |
| Bayesian rule list | 🗂️, 📄, 🔗 | Fits a compact rule-list distribution with Bayesian sampling (slow) |
| Greedy rule list | 🗂️, 📄 | Uses CART to fit a list (only a single path), rather than a tree |
| OneR rule list | 🗂️, 📄 | Fits a rule list restricted to only one feature |
| Optimal rule tree | 🗂️, 📄, 🔗 | Fits a succinct tree using global optimization for sparsity (GOSDT) |
| Greedy rule tree | 🗂️, 📄, 🔗 | Greedily fits a tree using CART |
| C4.5 rule tree | 🗂️, 📄, 🔗 | Greedily fits a tree using C4.5 |
| TAO rule tree | 🗂️, 📄 | Fits a tree using alternating optimization |
| Iterative random forest | 🗂️, 📄, 🔗 | Repeatedly fits a random forest, giving features with high importance a higher chance of being selected |
| Sparse integer linear model | 🗂️, 📄 | Sparse linear model with integer coefficients |
| Tree GAM | 🗂️, 📄, 🔗 | Generalized additive model fit with short boosted trees |
| Greedy tree sums (FIGS) | 🗂️, 📄 | Sum of small trees with very few total rules |
| Hierarchical shrinkage wrapper | 🗂️, 📄 | Improves a decision tree, random forest, or gradient-boosting ensemble with ultra-fast, post-hoc regularization |
| RF+ (MDI+) | 🗂️, 📄 | Flexible random-forest-based feature importance |
| Distillation wrapper | 🗂️ | Trains a black-box model, then distills it into an interpretable model |
| AutoML wrapper | 🗂️ | Automatically fits and selects an interpretable model |
| More models | | (Coming soon!) Lightweight Rule Induction, MLRules, ... |
Demos are contained in the notebooks folder, including a demo of using imodels to derive a clinical decision rule.
The final form of the above models takes one of the following forms, which aim to be simultaneously simple to understand and highly predictive: rule sets, rule lists, rule trees, and algebraic models.
Different models and algorithms vary not only in their final form but also in the choices made during modeling: how they generate rule candidates, how they select rules, and how they postprocess rules.
Different models support different machine-learning tasks. Current support for each model is given below; each of these models can be imported directly from imodels (e.g. `from imodels import RuleFitClassifier`). A minimal usage sketch follows the table.
| Model | Binary classification | Regression | Notes |
| --- | --- | --- | --- |
| RuleFit rule set | RuleFitClassifier | RuleFitRegressor | |
| Skope rule set | SkopeRulesClassifier | | |
| Boosted rule set | BoostedRulesClassifier | BoostedRulesRegressor | |
| SLIPPER rule set | SlipperClassifier | | |
| Bayesian rule set | BayesianRuleSetClassifier | | Fails for large problems |
| Optimal rule list (CORELS) | OptimalRuleListClassifier | | Requires corels, fails for large problems |
| Bayesian rule list | BayesianRuleListClassifier | | |
| Greedy rule list | GreedyRuleListClassifier | | |
| OneR rule list | OneRClassifier | | |
| Optimal rule tree (GOSDT) | OptimalTreeClassifier | | Requires gosdt, fails for large problems |
| Greedy rule tree (CART) | GreedyTreeClassifier | GreedyTreeRegressor | |
| C4.5 rule tree | C45TreeClassifier | | |
| TAO rule tree | TaoTreeClassifier | TaoTreeRegressor | |
| Iterative random forest | IRFClassifier | | Requires irf |
| Sparse integer linear model | SLIMClassifier | SLIMRegressor | Requires extra dependencies for speed |
| Tree GAM | TreeGAMClassifier | TreeGAMRegressor | |
| Greedy tree sums (FIGS) | FIGSClassifier | FIGSRegressor | |
| Hierarchical shrinkage | HSTreeClassifierCV | HSTreeRegressorCV | Wraps any sklearn tree-based model |
| Distillation | | DistilledRegressor | Wraps any sklearn-compatible models |
| AutoML model | AutoInterpretableClassifier | AutoInterpretableRegressor | |
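As a quick sketch of the shared interface (the dataset and model choice here are just illustrative), every class in the table can be imported from the top-level package and used like any standard sklearn estimator:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from imodels import RuleFitClassifier  # swap in any model from the table above

# illustrative binary-classification dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RuleFitClassifier()  # standard sklearn-style estimator
model.fit(X_train, y_train)

print(model.predict(X_test)[:5])    # class predictions
print(model.score(X_test, y_test))  # held-out accuracy
```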
The package also provides discretizers for preprocessing continuous features:

| Discretizer | Reference | Description |
| --- | --- | --- |
| MDLP | 🗂️, 📄, 🔗 | Discretizes using an entropy-minimization heuristic |
| Simple | 🗂️, 📄 | Simple KBins discretization |
| Random Forest | 🗂️ | Discretizes into bins based on random-forest split popularity |
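To make the "Simple" row concrete, here is a minimal sketch of KBins discretization using sklearn's own `KBinsDiscretizer` (the imodels discretizers expose the same sklearn-compatible fit/transform pattern):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

# toy continuous feature (one column)
X = np.array([[0.1], [0.4], [1.2], [3.5], [7.9], [8.0]])

# equal-frequency ("quantile") binning into 3 ordinal bins
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
X_binned = disc.fit_transform(X)

print(X_binned.ravel())  # bin index per sample: [0. 0. 1. 1. 2. 2.]
print(disc.bin_edges_)   # learned bin boundaries per feature
```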
After developing and playing with imodels, we developed a few new models to overcome limitations of existing interpretable models.
Paper, Post, Citation
Fast Interpretable Greedy-Tree Sums (FIGS) is an algorithm for fitting concise rule-based models. Specifically, FIGS generalizes CART to simultaneously grow a flexible number of trees in a summation. The total number of splits across all the trees can be restricted by a pre-specified threshold, keeping the model interpretable. Experiments across a wide array of real-world datasets show that FIGS achieves state-of-the-art prediction performance when restricted to just a few splits (e.g. less than 20).
Example FIGS model. FIGS learns a sum of a flexible number of trees; to make a prediction, it sums the results from each tree.
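A minimal sketch of fitting FIGS with a cap on the total number of splits (the dataset and the `max_rules` threshold here are illustrative; consult the docs for exact parameters):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from imodels import FIGSClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# cap the total number of splits summed across ALL trees, keeping
# the fitted model small enough to read end-to-end
model = FIGSClassifier(max_rules=8)  # illustrative threshold
model.fit(X_train, y_train)

print(model)                        # shows each tree in the sum
print(model.score(X_test, y_test))  # held-out accuracy
```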
Paper (ICML 2022), Post, Citation
Hierarchical shrinkage is an extremely fast post-hoc regularization method which works on any decision tree (or tree-based ensemble, such as Random Forest). It does not modify the tree structure, and instead regularizes the tree by shrinking the prediction over each node towards the sample means of its ancestors (using a single regularization parameter). Experiments over a wide variety of datasets show that hierarchical shrinkage substantially increases the predictive performance of individual decision trees and decision-tree ensembles.
HS Example. HS applies post-hoc regularization to any decision tree by shrinking each node towards its parent.
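Since HS is a wrapper, it can be pointed at an existing sklearn tree. A minimal sketch (the `estimator_` keyword is an assumption; the CV variant selects the shrinkage strength automatically):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

from imodels import HSTreeClassifierCV

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# grow an ordinary CART tree, then shrink its node predictions toward
# ancestor means; the CV wrapper picks the regularization strength
base_tree = DecisionTreeClassifier(max_leaf_nodes=20, random_state=0)
model = HSTreeClassifierCV(estimator_=base_tree)  # estimator_ kwarg assumed; see docs
model.fit(X_train, y_train)

# the tree structure is unchanged; only the node predictions shrink
print(model.score(X_test, y_test))
```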
Paper, Post, Citation
MDI+ is a novel feature-importance framework, which generalizes the popular mean decrease in impurity (MDI) importance score for random forests. At its core, MDI+ expands upon a recently discovered connection between linear regression and decision trees. In doing so, MDI+ enables practitioners to (1) tailor the feature-importance computation to the data/problem structure and (2) incorporate additional features or knowledge to mitigate known biases of decision trees. In both real-data case studies and extensive real-data-inspired simulations, MDI+ outperforms commonly used feature-importance measures (e.g., MDI, permutation-based scores, and TreeSHAP) by substantial margins.
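For context on the baselines named above, here is how the two most common ones are computed with plain sklearn (MDI via `feature_importances_`, plus permutation importance); MDI+ itself ships with imodels (see the RF+ row in the model table):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# baseline 1: mean decrease in impurity (MDI), the score MDI+ generalizes
print(rf.feature_importances_[:5])

# baseline 2: permutation importance, measured on held-out data
result = permutation_importance(rf, X_test, y_test, n_repeats=5, random_state=0)
print(result.importances_mean[:5])
```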
Please cite the package if you use it in an academic work :)
```
@software{imodels2021,
  title = {imodels: a python package for fitting interpretable models},
  journal = {Journal of Open Source Software},
  publisher = {The Open Journal},
  year = {2021},
  author = {Singh, Chandan and Nasseri, Keyan and Tan, Yan Shuo and Tang, Tiffany and Yu, Bin},
  volume = {6},
  number = {61},
  pages = {3192},
  doi = {10.21105/joss.03192},
  url = {https://doi.org/10.21105/joss.03192},
}
```