
ai-fairness-toolkit
A comprehensive toolkit for evaluating and improving AI model fairness and explainability
The AI Fairness and Explainability Toolkit is an open-source platform designed to evaluate, visualize, and improve AI models with a focus on fairness, explainability, and ethical considerations. Unlike traditional benchmarking tools that focus primarily on performance metrics, this toolkit helps developers understand and mitigate bias, explain model decisions, and ensure ethical AI deployment.
Our mission is to democratize ethical AI development by providing tools that make fairness and explainability accessible to all developers, regardless of their expertise in ethics or advanced ML techniques.
# Install from PyPI
pip install ai-fairness-toolkit
# Or install from source
pip install git+https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
from ai_fairness_toolkit import FairnessAnalyzer, BiasMitigator, ModelExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import fetch_openml
import pandas as pd
# Load the Adult census dataset (OpenML ID 1590)
data = fetch_openml(data_id=1590, as_frame=True)
X, y = data.data, data.target
# Keep the sensitive attribute, then one-hot encode the categorical
# columns so scikit-learn can consume the data
sensitive = X['sex']
X_encoded = pd.get_dummies(X)
# Initialize the fairness analyzer
analyzer = FairnessAnalyzer(sensitive_features=sensitive)
# Train a model
model = RandomForestClassifier()
model.fit(X_encoded, y)
# Evaluate fairness across groups
results = analyzer.evaluate(model, X_encoded, y)
print(results.fairness_metrics)
# Generate an interactive report
analyzer.visualize().show()
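The FairnessAnalyzer API above is specific to this toolkit, but the kind of group-fairness metric it reports can be illustrated with plain scikit-learn. The sketch below computes the demographic parity difference (the gap in positive-prediction rates between two groups) by hand, using synthetic data and a hypothetical binary sensitive attribute standing in for a column like X['sex']:

```python
# Illustrative sketch only: computes demographic parity difference manually,
# without the toolkit, to show what a group-fairness metric measures.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data plus a random binary "sensitive" attribute (hypothetical)
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
sensitive = rng.integers(0, 2, size=len(y))

model = RandomForestClassifier(random_state=0).fit(X, y)
pred = model.predict(X)

# Demographic parity difference: gap in positive-prediction rates
# between the two sensitive groups (0 = perfectly "fair" on this metric)
rate_a = pred[sensitive == 0].mean()
rate_b = pred[sensitive == 1].mean()
dp_diff = abs(rate_a - rate_b)
print(f"Demographic parity difference: {dp_diff:.3f}")
```

Because the sensitive attribute here is random and independent of the labels, the gap should be small; on real data like Adult, a large gap is a signal worth investigating with the toolkit's mitigation tools.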
ai-fairness-toolkit/
├── ai_fairness_toolkit/ # Main package
│ ├── core/ # Core functionality
│ │ ├── metrics/ # Fairness and performance metrics
│ │ ├── bias_mitigation/ # Bias mitigation techniques
│ │ ├── explainers/ # Model explainability tools
│ │ └── visualization/ # Visualization components
│ ├── examples/ # Example notebooks
│ └── utils/ # Utility functions
├── tests/ # Test suite
├── docs/ # Documentation
├── examples/ # Example scripts
└── scripts/ # Utility scripts
For detailed documentation, please visit ai-fairness-toolkit.readthedocs.io.
We welcome contributions from the community! Here's how you can help:
# Clone the repository
git clone https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
cd AI-Fairness-Explainability-Toolkit
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
We use Black for code formatting and flake8 for linting. Please ensure your code passes both before submitting a PR.
# Auto-format code
black .
# Run linter
flake8
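One optional way to run both checks automatically before each commit is a pre-commit configuration. The snippet below is an illustrative setup using the standard Black and flake8 hooks; the `rev` pins are examples and should be updated to the versions your project supports:

```yaml
# .pre-commit-config.yaml (illustrative example)
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 7.1.1
    hooks:
      - id: flake8
```

After installing pre-commit (`pip install pre-commit`), run `pre-commit install` once to activate the hooks.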
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or feedback, please open an issue or contact taimoorkhaniajaznabi2@gmail.com.
This project follows the all-contributors specification. Contributions of any kind are welcome!
AFET (the AI Fairness and Explainability Toolkit) is currently in development. We're looking for contributors and early adopters to help shape the future of ethical AI evaluation!