Website • Docs • Community Slack
💡 What is NannyML?
NannyML is an open-source Python library that allows you to estimate post-deployment model performance (without access to targets), detect data drift, and intelligently link data drift alerts back to changes in model performance. Built for data scientists, NannyML has an easy-to-use interface and interactive visualizations, is completely model-agnostic, and currently supports all tabular use cases: classification and regression.
The core contributors of NannyML have researched and developed multiple novel algorithms for estimating model performance: confidence-based performance estimation (CBPE) and direct loss estimation (DLE).
The Nansters also invented a new approach to detect multivariate data drift using PCA-based data reconstruction.
If you like what we are working on, be sure to become a Nanster yourself, join our community slack and support us with a GitHub star ⭐.
☔ Why use NannyML?
NannyML closes the loop with performance monitoring and post-deployment data science, empowering data scientists to quickly understand and automatically detect silent model failure. By using NannyML, data scientists can finally maintain complete visibility and trust in their deployed machine learning models.
Using NannyML, you can:
- End sleepless nights caused by not knowing your model performance 😴
- Analyse data drift and model performance over time
- Discover the root cause of why your models are not performing as expected
- No alert fatigue! React only when necessary if model performance is impacted
- Painless setup in any environment
🔱 Features
1. Performance estimation and monitoring
When the actual outcome of your deployed prediction models is delayed, or even when post-deployment target labels are completely absent, you can use NannyML's CBPE algorithm to estimate model performance for classification, or NannyML's DLE algorithm for regression. These algorithms can estimate any metric you would like, e.g. ROC AUC or RMSE. Rather than estimating the performance of future model predictions, CBPE and DLE estimate the expected model performance of the predictions made at inference time.
NannyML can also track the realised performance of your machine learning model once targets are available.
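The intuition behind CBPE can be sketched in a few lines: when a classifier's predicted probabilities are well calibrated, each prediction carries its own probability of being correct, so an expected metric can be computed without any labels. The following is an illustrative toy with made-up probabilities, not NannyML's implementation (CBPE actually estimates confusion-matrix elements from calibrated scores to support metrics like ROC AUC):

```python
import numpy as np

def estimate_accuracy_without_labels(p_pos, threshold=0.5):
    """Estimate expected accuracy from calibrated probabilities alone.

    For each prediction, the probability of being correct is p if we
    predict the positive class and 1 - p if we predict the negative one.
    """
    p_pos = np.asarray(p_pos, dtype=float)
    prob_correct = np.where(p_pos >= threshold, p_pos, 1.0 - p_pos)
    return prob_correct.mean()

# Hypothetical calibrated scores from a deployed binary classifier
probs = [0.95, 0.10, 0.80, 0.40]
est = estimate_accuracy_without_labels(probs)  # (0.95 + 0.90 + 0.80 + 0.60) / 4
```

If the probabilities are miscalibrated, the estimate degrades, which is why CBPE calibrates scores on the reference period first.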
2. Data drift detection
To detect multivariate feature drift NannyML uses PCA-based data reconstruction. Changes in the resulting reconstruction error are monitored over time and data drift alerts are logged when the reconstruction error in a certain period exceeds a threshold. This threshold is calculated based on the reconstruction error observed in the reference period.
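As an illustration of the reconstruction-error idea (a standalone sketch using scikit-learn, not NannyML's implementation), the snippet below fits PCA on a reference sample, reconstructs both reference and analysis data from the retained components, and compares the average reconstruction error. The shifted analysis data reconstructs noticeably worse:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 5))                  # reference period data
analysis = rng.normal(loc=1.5, size=(500, 5))          # drifted analysis data

# Fit scaler and PCA on the reference period only
scaler = StandardScaler().fit(reference)
pca = PCA(n_components=3).fit(scaler.transform(reference))

def mean_reconstruction_error(X):
    """Project onto the PCA subspace, map back, and average the residual norm."""
    Z = scaler.transform(X)
    Z_hat = pca.inverse_transform(pca.transform(Z))
    return np.sqrt(((Z - Z_hat) ** 2).sum(axis=1)).mean()

ref_err = mean_reconstruction_error(reference)
ana_err = mean_reconstruction_error(analysis)   # larger: structure has changed
```

An alerting threshold would then be set from the reconstruction error observed across reference chunks, as described above.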
NannyML utilises statistical tests to detect univariate feature drift. We have recently added several new univariate drift methods, including Jensen-Shannon distance and L-infinity distance; check out the comprehensive list. The results of these tests are tracked over time, properly corrected to counteract multiplicity, and overlaid on the temporal feature distributions. (It is also possible to visualise the test statistics over time, to get a notion of the drift magnitude.)
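The Jensen-Shannon distance mentioned above can be computed directly with SciPy. This standalone sketch (synthetic data, not NannyML's API) bins a reference and an analysis sample on a shared grid and measures the distance between the two histograms:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 2000)   # feature values in the reference period
analysis = rng.normal(0.5, 1.0, 2000)    # mean-shifted values in an analysis chunk

# Bin both samples on a shared grid, then compare the histograms
bins = np.histogram_bin_edges(np.concatenate([reference, analysis]), bins=30)
ref_hist, _ = np.histogram(reference, bins=bins, density=True)
ana_hist, _ = np.histogram(analysis, bins=bins, density=True)

# With base=2 the distance is bounded in [0, 1]; 0 means identical distributions
js = jensenshannon(ref_hist, ana_hist, base=2)
```

A distance-based method like this has no p-value; instead, an alert fires when the distance in a chunk exceeds a threshold derived from the reference period.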
NannyML uses the same statistical tests to detect model output drift.
Target distribution drift can also be monitored using the same statistical tests. Bear in mind that this operation requires the presence of actuals.
3. Intelligent alerting
Because NannyML can estimate performance, it is possible to weed out data drift alerts that do not impact expected performance, combatting alert fatigue. Besides linking data drift issues to drops in performance, it is also possible to prioritise alerts according to other criteria using NannyML's Ranker.
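Conceptually, alert-count ranking simply orders features by how many chunks raised a drift alert for them. A minimal sketch of that idea, with hypothetical alert data (NannyML's AlertCountRanker works on its drift result objects instead):

```python
from collections import Counter

# Hypothetical (chunk, feature) pairs where a univariate drift test fired
alerts = [
    ("2023-01", "AGEP"),
    ("2023-01", "SCHL"),
    ("2023-02", "AGEP"),
    ("2023-03", "AGEP"),
]

# Count alerts per feature and rank from most to least alerting
counts = Counter(feature for _, feature in alerts)
ranking = [feature for feature, _ in counts.most_common()]
```

Features at the top of the ranking are the first candidates for root-cause analysis.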
🚀 Getting started
Install NannyML
NannyML depends on LightGBM. This might require you to install additional
OS-specific binaries. You can follow the official installation guide.
From PyPI:
pip install nannyml
From Conda:
conda install -c conda-forge nannyml
Running via Docker:
docker run -v /local/config/dir/:/config/ nannyml/nannyml nml run
Here be dragons! Use the latest development version of NannyML at your own risk:
python -m pip install git+https://github.com/NannyML/nannyml
If you're using database connections to read model inputs/outputs, or you're exporting monitoring results to a database,
you'll need to include the optional db dependency. For example, using pip:
pip install nannyml[db]
or using poetry:
poetry add nannyml --extras db
Quick Start
The following snippet is based on our latest release.
import nannyml as nml
import pandas as pd
from IPython.display import display

# Load a real-world dataset, split into a reference and an analysis period
reference_df, analysis_df, _ = nml.load_us_census_ma_employment_data()
display(reference_df.head())
display(analysis_df.head())

# Estimate performance without targets using CBPE
chunk_size = 5000
estimator = nml.CBPE(
    problem_type='classification_binary',
    y_pred_proba='predicted_probability',
    y_pred='prediction',
    y_true='employed',
    metrics=['roc_auc'],
    chunk_size=chunk_size,
)
estimator = estimator.fit(reference_df)
estimated_performance = estimator.estimate(analysis_df)

figure = estimated_performance.plot()
figure.show()

# Detect univariate drift in the model's input features
features = ['AGEP', 'SCHL', 'MAR', 'RELP', 'DIS', 'ESP', 'CIT', 'MIG', 'MIL', 'ANC',
            'NATIVITY', 'DEAR', 'DEYE', 'DREM', 'SEX', 'RAC1P']
univariate_calculator = nml.UnivariateDriftCalculator(
    column_names=features,
    chunk_size=chunk_size
)
univariate_calculator.fit(reference_df)
univariate_drift = univariate_calculator.calculate(analysis_df)

# Rank features by the number of drift alerts they raised
alert_count_ranker = nml.AlertCountRanker()
alert_count_ranked_features = alert_count_ranker.rank(univariate_drift)
display(alert_count_ranked_features.head())

figure = univariate_drift.filter(column_names=['RELP', 'AGEP', 'SCHL']).plot()
figure.show()

# Compare estimated performance against the drift of a single feature
uni_drift_AGEP_analysis = univariate_drift.filter(column_names=['AGEP'], period='analysis')
figure = estimated_performance.compare(uni_drift_AGEP_analysis).plot()
figure.show()

figure = univariate_drift.filter(period='analysis', column_names=['RELP', 'AGEP', 'SCHL']).plot(kind='distribution')
figure.show()

# Once targets arrive, calculate the realised performance...
_, _, analysis_targets_df = nml.load_us_census_ma_employment_data()
analysis_with_targets_df = pd.concat([analysis_df, analysis_targets_df], axis=1)
display(analysis_with_targets_df.head())

performance_calculator = nml.PerformanceCalculator(
    problem_type='classification_binary',
    y_pred_proba='predicted_probability',
    y_pred='prediction',
    y_true='employed',
    metrics=['roc_auc'],
    chunk_size=chunk_size)
performance_calculator.fit(reference_df)
calculated_performance = performance_calculator.calculate(analysis_with_targets_df)

# ...and compare it with the CBPE estimate
figure = estimated_performance.filter(period='analysis').compare(calculated_performance).plot()
figure.show()
📖 Documentation
- Performance monitoring
- Drift detection
🦸 Contributing and Community
We want to build NannyML together with the community! The easiest way to contribute at the moment is to propose new features or log bugs under issues. For more information, have a look at how to contribute.
Thanks to all of our contributors!
🙋 Get help
The best place to ask for help is in the community slack. Feel free to join and ask questions or raise issues. Someone will definitely respond to you.
🥷 Stay updated
If you want to stay up to date with recent changes to the NannyML library, you can subscribe to our release notes. For thoughts on post-deployment data science from the NannyML team, feel free to visit our blog. You can also sign up for our newsletter, which brings together the best papers, articles, news, and open-source libraries highlighting the ML challenges after deployment.
📍 Roadmap
Curious what we are working on next? Have a look at our roadmap. If you have any questions or if you would like to see things prioritised in a different way, let us know!
📝 Citing NannyML
To cite NannyML in academic papers, please use the following BibTeX entry.
Version 0.12.0
@misc{nannyml,
title = {{N}anny{ML} (release 0.12.0)},
howpublished = {\url{https://github.com/NannyML/nannyml}},
month = mar,
year = 2023,
note = {NannyML, Belgium, OHL.},
key = {NannyML}
}
📄 License
NannyML is distributed under the Apache License, Version 2.0. A complete version can be found here. All contributions will be distributed under this license.