holisticai 1.0.10 · PyPI

Holistic AI: building trustworthy AI systems



Holistic AI is an open-source library dedicated to assessing and improving the trustworthiness of AI systems. We believe that responsible AI development requires a comprehensive evaluation across multiple dimensions, beyond just accuracy.

Current Capabilities


Holistic AI currently focuses on five verticals of AI trustworthiness:

  1. Bias: measure and mitigate bias in AI models.
  2. Explainability: gain insight into model behavior and decision-making.
  3. Robustness: measure model performance under various conditions.
  4. Security: measure the privacy risks associated with AI models.
  5. Efficacy: measure the effectiveness of AI models.
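
To make the bias vertical concrete, the core idea behind many group-fairness metrics is comparing selection rates across protected groups. The sketch below computes two standard quantities, statistical parity difference and disparate impact, from scratch with NumPy; the data is invented for illustration, and the `group_a`/`group_b` boolean-mask convention mirrors the metric signatures used later in the Quick Start:

```python
import numpy as np

# Toy binary predictions for ten samples (invented data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

# Boolean masks selecting the two protected groups over the same samples.
group_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
group_b = ~group_a

# Selection rate: fraction of positive predictions within each group.
rate_a = y_pred[group_a].mean()
rate_b = y_pred[group_b].mean()

# Statistical parity difference: 0 means perfect parity.
statistical_parity = rate_a - rate_b

# Disparate impact ratio: 1 means perfect parity; the common
# "four-fifths rule" flags values below 0.8.
disparate_impact = rate_a / rate_b
```

Here group a is selected at rate 0.6 and group b at 0.8, so the disparate impact ratio of 0.75 would fall below the four-fifths threshold.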

Quick Start


pip install holisticai                  # Basic installation
pip install holisticai[bias]            # Bias mitigation support
pip install holisticai[explainability]  # Explainability metrics and plots
pip install holisticai[all]             # All optional dependencies
# imports
from holisticai.bias.metrics import classification_bias_metrics
from holisticai.datasets import load_dataset
from holisticai.bias.plots import bias_metrics_report
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# load an example dataset and split
dataset = load_dataset('law_school', protected_attribute="race")
dataset_split = dataset.train_test_split(test_size=0.3)

# separate the data into train and test sets
train_data = dataset_split['train']
test_data = dataset_split['test']

# rescale the data
scaler = StandardScaler()
X_train_t = scaler.fit_transform(train_data['X'])
X_test_t = scaler.transform(test_data['X'])

# train a logistic regression model
model = LogisticRegression(random_state=42, max_iter=500)
model.fit(X_train_t, train_data['y'])

# make predictions
y_pred = model.predict(X_test_t)

# compute bias metrics
metrics = classification_bias_metrics(
    group_a=test_data['group_a'],
    group_b=test_data['group_b'],
    y_true=test_data['y'],
    y_pred=y_pred,
)

# create a comprehensive report
bias_metrics_report(model_type='binary_classification', table_metrics=metrics)

Key Features


  • Comprehensive Metrics: Measure various aspects of AI system trustworthiness, including bias, fairness, and explainability.
  • Mitigation Techniques: Implement strategies to address identified issues and improve the fairness and robustness of AI models.
  • User-Friendly Interface: Intuitive API for easy integration into existing workflows.
  • Visualization Tools: Generate insightful visualizations for better understanding of model behavior and bias patterns.
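
As an illustration of what a mitigation technique does, here is a minimal from-scratch sketch of one classic pre-processing strategy, reweighing (Kamiran & Calders), written directly in NumPy rather than through holisticai's own mitigation API; the label and group arrays are invented toy data:

```python
import numpy as np

# Toy binary labels and a binary protected attribute (invented data).
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Reweighing assigns each (group, label) cell the weight
# P(group) * P(label) / P(group, label), so that after weighting the
# protected attribute is statistically independent of the label.
weights = np.empty(len(y), dtype=float)
for g in (0, 1):
    for label in (0, 1):
        cell = (group == g) & (y == label)
        expected = (group == g).mean() * (y == label).mean()
        observed = cell.mean()
        weights[cell] = expected / observed

# After reweighing, the weighted positive rate is equal across groups.
rate_0 = weights[(group == 0) & (y == 1)].sum() / weights[group == 0].sum()
rate_1 = weights[(group == 1) & (y == 1)].sum() / weights[group == 1].sum()
```

The resulting weights can be passed to any estimator that accepts `sample_weight` (such as the `LogisticRegression.fit` call in the Quick Start) to train a model on the debiased distribution.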

Documentation and Tutorials


Detailed Installation


Troubleshooting (macOS):

Before installing the library, you may need to install these packages:

brew install cmake cbc pkg-config
python -m pip install cylp

Contributing

We welcome contributions from the community. To learn more about contributing to Holistic AI, please refer to our Contributing Guide.

FAQs

