Permutation Feature Selector

Acknowledgements
This library is inspired by the functionality and design of Scikit-learn's permutation importance.
Links
PyPI: https://pypi.org/project/permutation_feature_selector/
Installation
To install the library, use one of the following methods:
Standard Installation
You can install the package directly from PyPI using pip. This is the easiest and most common method:
$ pip install permutation_feature_selector
Installation from Source
If you prefer to install from the source or if you want the latest version that may not yet be released on PyPI, you can use the following commands:
$ git clone https://github.com/Itsuki-2822/permutation_feature_selector.git
$ cd permutation_feature_selector
$ python setup.py install
For Developers
If you are a contributor or if you want to install the latest development version of the library, use the following command to install directly from the GitHub repository:
$ pip install --upgrade git+https://github.com/Itsuki-2822/permutation_feature_selector
What Is Permutation Importance?
Basic Concept
The calculation of permutation importance proceeds through the following steps (a minimal sketch of the procedure follows this list):
- Evaluate Model Performance:
  - Measure the performance metric (e.g., accuracy or error) of the model on the original dataset, before any permutation.
- Shuffle the Feature:
  - Randomly shuffle the values of a single feature in the dataset. This breaks the relationship between that feature and the target variable.
- Re-evaluate Performance:
  - Assess the model's performance again, using the dataset with the shuffled feature.
- Calculate Importance:
  - Compute the difference in performance before and after the permutation. A larger difference indicates that the feature is more "important."
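The sketch below walks through these four steps by hand. It is illustrative only: the dataset, the Ridge model, and the rmse helper are assumptions for the example, not part of this library.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Fit any model (Ridge is just an illustrative choice).
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_train, y_train)

def rmse(model, X, y):
    # Hypothetical helper for this sketch, not a library function.
    return mean_squared_error(y, model.predict(X)) ** 0.5

# Step 1: baseline performance on the unpermuted data.
baseline = rmse(model, X_test, y_test)

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Step 2: shuffle a single feature column.
    rng.shuffle(X_perm[:, j])
    # Step 3: re-evaluate on the permuted data.
    permuted = rmse(model, X_perm, y_test)
    # Step 4: importance = how much the error grew after shuffling.
    print(f"feature {j}: importance = {permuted - baseline:.4f}")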
Model-Independent Advantage
Permutation importance is model-agnostic: it does not depend on the internal mechanisms of any specific model, such as the built-in importance measures of gradient boosting or decision trees. It can therefore be applied to any predictive model (linear models, decision trees, neural networks, etc.), as the example below illustrates.
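For instance, scikit-learn's permutation_importance runs unchanged against entirely different model families; the two models below are illustrative choices, not requirements:

from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)

# The same permutation-importance call works for any fitted estimator.
for model in (LinearRegression(), DecisionTreeRegressor(random_state=0)):
    model.fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    print(type(model).__name__, result.importances_mean.round(3))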
Considerations for Use
- Randomness:
  - Because the feature shuffling is random, results may vary slightly from run to run. For stable estimates, average over several repetitions (see the sketch after this list).
- Correlated Features:
  - If multiple features are strongly correlated, their importance may be underestimated, since the model can recover the shuffled information from the correlated features. Addressing this may require careful feature selection and engineering.
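As a sketch of the randomness point, the repeats can be averaged and their spread inspected. This again uses scikit-learn's permutation_importance; the model is an illustrative choice and the printed numbers depend on the data and model:

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# More repeats give a more stable mean; the std shows run-to-run variation.
result = permutation_importance(model, X, y, n_repeats=30, random_state=0)
for name, mean, std in zip(load_diabetes().feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")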
Examples
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
import lightgbm as lgb
import matplotlib.pyplot as plt
from permutation_feature_selector import PermutationFeatureSelector

# Load the diabetes regression dataset and split it.
data = load_diabetes()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train any model; a LightGBM regressor is used here.
model = lgb.LGBMRegressor()
model.fit(X_train, y_train)

# Compute permutation importance on the held-out test set,
# averaging over 30 random shuffles per feature.
selector = PermutationFeatureSelector(model, X_test, y_test, metric='rmse', n_repeats=30, random_state=42)
perm_importance = selector.calculate_permutation_importance()
print(perm_importance)

# Plot the importances.
selector.plot_permutation_importance()

# Select features using a mean-based threshold
# (threshold_value scales the mean importance).
chosen_features, chosen_features_df = selector.choose_feat(threshold_method='mean', threshold_value=1.0)
print(chosen_features)
print(chosen_features_df)