UNIQUE (UNcertaInty QUantification bEnchmark): a Python library for benchmarking uncertainty estimation and quantification methods for Machine Learning model predictions.
Introduction
UNIQUE provides methods for quantifying and evaluating the uncertainty of Machine Learning (ML) model predictions. The library lets you combine and benchmark multiple uncertainty quantification (UQ) methods simultaneously, generates intuitive visualizations, evaluates the quality of the UQ methods against established metrics, and, overall, gives users a comprehensive overview of their ML model's performance from an uncertainty quantification perspective.
UNIQUE is a model-agnostic tool: it does not depend on any specific ML model-building platform, nor does it provide any ML model training functionality. It is also lightweight, requiring only the model's inputs and predictions from the user.
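For instance, because UNIQUE consumes only precomputed values, a typical first step is to collect the model's inputs, predictions, and labels into a single table. The snippet below is a minimal, hypothetical sketch: the column names and layout are illustrative assumptions, and the exact expected data format is described in the docs.

import pandas as pd

# Hypothetical data-preparation sketch: column names are illustrative
# assumptions, not UNIQUE's required schema (see the docs for that).
data = pd.DataFrame({
    "ID": [0, 1, 2, 3],
    "labels": [1.2, 0.7, 3.4, 2.1],            # ground-truth target values
    "predictions": [1.0, 0.9, 3.1, 2.5],       # your trained model's predictions
    "avg_distance": [0.12, 0.35, 0.08, 0.41],  # an example per-sample input feature
    "which_set": ["TRAIN", "TRAIN", "CALIBRATION", "TEST"],  # data subset membership
})
data.to_csv("dataset.csv", index=False)        # referenced later from the config file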
High-level schema of UNIQUE's components.
Installation
UNIQUE is currently compatible with Python 3.8 through 3.12.1. To install the latest release and use the package as is, run the following in a compatible environment of your choice:
pip install unique-uncertainty
or:
conda install -c conda-forge unique-uncertainty
Check out the docs for more installation instructions.
Getting Started
Check out the docs for a complete set of instructions on how to prepare your data and the possible configurations offered by UNIQUE.
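As a rough, hypothetical illustration of what such a configuration file could look like (the key names below are assumptions made for this sketch; the authoritative schema is in the docs):

# config.yaml -- illustrative sketch only; key names are assumptions,
# consult the docs for the actual configuration schema.
data_path: "dataset.csv"       # table with inputs, predictions, and labels
output_path: "outputs"         # where tables and plots are written
problem_type: "regression"     # the underlying ML task
inputs_list: []                # which data columns/UQ inputs to benchmark
display_outputs: True          # render tables and plots in a Jupyter notebook cell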
Usage
Finally, once the data and configuration files have been prepared, you can run UNIQUE in the following way:
from unique import Pipeline
pipeline = Pipeline.from_config("/path/to/config.yaml")
uq_methods_outputs, uq_evaluation_outputs = pipeline.fit()
Fitting the Pipeline will return two dictionaries:

- uq_methods_outputs: contains each UQ method's name (as in "UQ_Method_Name[Input_Name(s)]") and its computed UQ values.
- uq_evaluation_outputs: contains, for each evaluation type (ranking-based, proper scoring rules, and calibration-based), the evaluation metrics outputs for all the corresponding UQ methods, organized as pd.DataFrame objects.
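For example (a sketch assuming, as described above, array-like UQ values and one pd.DataFrame per evaluation type), the returned dictionaries can be inspected directly:

# Inspect the computed UQ values for each method.
for uq_name, uq_values in uq_methods_outputs.items():
    print(uq_name)           # e.g., "UQ_Method_Name[Input_Name(s)]"
    print(uq_values[:5])     # assumption: array-like of per-sample UQ values

# Inspect the evaluation metrics for each evaluation type.
for eval_type, metrics_df in uq_evaluation_outputs.items():
    print(eval_type)         # ranking-based, proper scoring rules, or calibration-based
    print(metrics_df.head()) # pd.DataFrame of metrics for the corresponding UQ methods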
Additionally, UNIQUE generates graphical outputs in the form of tables and evaluation plots (if display_outputs is enabled and the code is running in a Jupyter notebook cell).
Examples
For more hands-on examples and detailed usage, check out some of the examples in the docs.
Deep Dive
Check out the docs for an in-depth overview of UNIQUE's concepts, functionalities, outputs, and references.
Contributing
Any and all contributions and suggestions from the community are more than welcome and highly appreciated. If you wish to help us make UNIQUE even better, please check out our contributing guidelines.
Please note that we have a Code of Conduct in place to ensure a positive and inclusive community environment. By participating in this project, you agree to abide by its terms.
License
UNIQUE is licensed under the BSD 3-Clause License. See the LICENSE file.
Cite Us
If you find UNIQUE helpful for your work and/or research, please consider citing our work:
@misc{lanini2024unique,
  title={UNIQUE: A Framework for Uncertainty Quantification Benchmarking},
  author={Lanini, Jessica and Huynh, Minh Tam Davide and Scebba, Gaetano and Schneider, Nadine and Rodr{\'\i}guez-P{\'e}rez, Raquel},
  year={2024},
  doi={https://doi.org/10.26434/chemrxiv-2024-fmbgk},
}
Contacts & Acknowledgements
For any questions or further details about the project, please get in touch with any of the following contacts: