Welcome to MLLM-SHAP
MLLM-SHAP is a Python package for interpreting the predictions of large language models (LLMs) with SHAP (SHapley Additive exPlanations) values.
It helps you quantify how much each part of the input (for example, a text token or an audio segment) contributes to the model's output, enabling transparent and explainable AI workflows.
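As general background (this is the standard game-theoretic definition, not anything specific to this package): SHAP attributes to each input feature i its Shapley value, i.e. the average marginal contribution of i over all subsets S of the remaining features N \ {i}, where v(S) scores the model output produced when only the features in S are present:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
\,\Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```

Computing this sum exactly requires evaluating the model on every feature subset, which is why approximation strategies (see the feature list below) matter in practice.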
✨ Key Features
- Integration with audio and text models, supporting multi-modal inputs and outputs.
- Flexible aggregation strategies: mean, sum, max, min, etc.
- Multiple similarity metrics (cosine, Euclidean, etc.) for embedding analysis.
- Customizable SHAP calculation algorithms: exact computation, Monte Carlo approximations, and more (a generic sketch of the Monte Carlo approach follows this list).
- Examples showcasing common explainability pipelines in examples/ on the official GitHub repository.
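The bullets above follow the usual SHAP recipe for generative models: perturb the input, re-run the model, compare outputs with a similarity metric, and aggregate the marginal contributions. The sketch below illustrates that recipe with a Monte Carlo permutation estimate and cosine similarity. It is a self-contained toy, not the MLLM-SHAP API; `embed_output` is a hypothetical placeholder for any model that maps tokens to an output embedding.

```python
import random
import numpy as np


def embed_output(tokens: list[str]) -> np.ndarray:
    """Hypothetical placeholder model: hashes tokens into a fixed-size vector."""
    vec = np.zeros(8)
    for tok in tokens:
        vec[hash(tok) % 8] += 1.0
    return vec


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; returns 0.0 when either vector is all zeros."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def monte_carlo_shap(tokens: list[str], n_samples: int = 200) -> dict[str, float]:
    """Monte Carlo estimate of each token's Shapley value.

    The payoff of a token subset is the cosine similarity between the output
    embedding for that subset and the embedding for the full input.
    Assumes distinct tokens for simplicity.
    """
    reference = embed_output(tokens)
    contrib = {tok: 0.0 for tok in tokens}
    for _ in range(n_samples):
        order = random.sample(tokens, len(tokens))  # random permutation
        included: list[str] = []
        prev = cosine(embed_output(included), reference)
        for tok in order:
            included.append(tok)
            score = cosine(embed_output(included), reference)
            contrib[tok] += score - prev  # marginal contribution of tok
            prev = score
    # Mean aggregation over the sampled permutations.
    return {tok: total / n_samples for tok, total in contrib.items()}


if __name__ == "__main__":
    print(monte_carlo_shap(["The", "movie", "was", "great"]))
```

Swapping cosine for another similarity metric, or the mean for another aggregation, corresponds conceptually to the configuration options listed above.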
📊 Visualization & Examples
For GUI visualization of SHAP values, see the Extension - GUI Visualization section of the docs.
For more advanced CLI usage, refer to:
🤖 Supported LLM Integrations