
aimodelshare
Deploy locally saved machine learning models to a live REST API and integrated dashboard.

Launch machine learning models into scalable, production-ready prediction REST APIs using a single Python function.
Details about each model, how to use the model's API, and the model's author(s) are deployed simultaneously into a searchable website at modelshare.org.
Each deployed model receives an individual Model Playground page listing information about the deployed model. Each of these pages includes a fully functional prediction dashboard that allows end users to input text, tabular, or image data and receive live predictions.
Moreover, users can build on model playgrounds by 1) creating ML model competitions, 2) uploading Jupyter notebooks to share code, 3) sharing model architectures, and 4) sharing data, with all shared artifacts automatically building a data science portfolio for the user.
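As a minimal sketch of this single-function deployment flow (the ModelPlayground class and the deploy() arguments shown here are assumptions based on typical aimodelshare usage; check the library docs for exact signatures):

# Minimal deployment sketch; argument names are illustrative
# assumptions, not a verified aimodelshare signature.
from aimodelshare import ModelPlayground

y_train = ["setosa", "versicolor", "virginica"]  # example training labels

playground = ModelPlayground(
    input_type="tabular",        # "text", "tabular", or "image"
    task_type="classification",
    private=False,
)

# Deploys the locally saved model to a live REST API and creates its
# Model Playground page at modelshare.org.
playground.deploy(
    model_filepath="model.onnx",
    preprocessor_filepath="preprocessor.zip",
    y_train=y_train,
)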
Install the latest release from PyPI:
pip install aimodelshare
To install via conda instead, make sure you have conda version >=4.9.
You can check your conda version with:
conda --version
To update conda use:
conda update conda
Installing aimodelshare from the conda-forge channel can be achieved by adding conda-forge to your channels with:
conda config --add channels conda-forge
conda config --set channel_priority strict
Once the conda-forge channel has been enabled, aimodelshare can be installed with conda:
conda install aimodelshare
or with mamba:
mamba install aimodelshare
The Moral Compass system now supports tracking multiple performance metrics for fairness-focused AI challenges. Track accuracy, demographic parity, equal opportunity, and other fairness metrics simultaneously.
from aimodelshare.moral_compass import ChallengeManager

# Create a challenge manager
manager = ChallengeManager(
    table_id="fairness-challenge-2024",
    username="your_username",
)

# Track multiple metrics
manager.set_metric("accuracy", 0.85, primary=True)
manager.set_metric("demographic_parity", 0.92)
manager.set_metric("equal_opportunity", 0.88)

# Track progress
manager.set_progress(tasks_completed=3, total_tasks=5)

# Sync to leaderboard
result = manager.sync()
print(f"Moral compass score: {result['moralCompassScore']:.4f}")
moralCompassScore = primaryMetricValue × ((tasksCompleted + questionsCorrect) / (totalTasks + totalQuestions))
This combines the primary metric value with a completion ratio: the number of tasks completed and questions answered correctly, divided by their respective totals.
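As a quick worked example using the numbers from the snippet above (accuracy 0.85 as the primary metric and 3 of 5 tasks completed; zero question counts are an assumption here):

# Worked example of the score formula; question counts of zero are assumed.
primary_metric_value = 0.85
tasks_completed, total_tasks = 3, 5
questions_correct, total_questions = 0, 0

score = primary_metric_value * (
    (tasks_completed + questions_correct) / (total_tasks + total_questions)
)
print(score)  # 0.85 * (3 / 5) = 0.51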
See the Justice & Equity Challenge Example for detailed, end-to-end examples.
from aimodelshare.moral_compass import ChallengeManager

manager = ChallengeManager(table_id="my-table", username="user1")

# Set metrics
manager.set_metric("accuracy", 0.90, primary=True)
manager.set_metric("fairness", 0.95)

# Set progress
manager.set_progress(tasks_completed=4, total_tasks=5)

# Preview score locally
score = manager.get_local_score()

# Sync to server
result = manager.sync()
from aimodelshare.moral_compass import MoralcompassApiClient

client = MoralcompassApiClient()

# Update moral compass with metrics
result = client.update_moral_compass(
    table_id="my-table",
    username="user1",
    metrics={"accuracy": 0.90, "fairness": 0.95},
    primary_metric="fairness",
    tasks_completed=4,
    total_tasks=5,
)
The Moral Compass API client requires a base URL to connect to the REST API. The URL is resolved from an environment variable, cached Terraform outputs, or a live terraform command, in that order, as described below.
In GitHub Actions workflows, the MORAL_COMPASS_API_BASE_URL environment variable is automatically exported from Terraform outputs:
- name: Initialize Terraform and get API URL
  working-directory: infra
  run: |
    terraform init
    terraform workspace select dev || terraform workspace new dev
    API_URL=$(terraform output -raw api_base_url)
    echo "MORAL_COMPASS_API_BASE_URL=$API_URL" >> $GITHUB_ENV
When developing locally, the API client attempts to resolve the URL in this order:
1. Environment variable - set MORAL_COMPASS_API_BASE_URL or AIMODELSHARE_API_BASE_URL:
   export MORAL_COMPASS_API_BASE_URL="https://api.example.com/v1"
2. Cached Terraform outputs - the client looks for infra/terraform_outputs.json.
3. Terraform command - as a fallback, the client executes terraform output -raw api_base_url in the infra/ directory.
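A hypothetical helper illustrating this resolution order (the function name and the terraform_outputs.json key layout are assumptions for illustration, not part of the package API):

import json
import os
import subprocess

def resolve_api_base_url():
    # 1. Environment variables, checked in the documented order.
    for var in ("MORAL_COMPASS_API_BASE_URL", "AIMODELSHARE_API_BASE_URL"):
        url = os.environ.get(var)
        if url:
            return url
    # 2. Cached Terraform outputs; the "value" nesting follows the
    #    `terraform output -json` format and is assumed here.
    try:
        with open("infra/terraform_outputs.json") as f:
            return json.load(f)["api_base_url"]["value"]
    except (OSError, KeyError, json.JSONDecodeError):
        pass
    # 3. Fall back to running terraform directly in infra/.
    try:
        return subprocess.check_output(
            ["terraform", "output", "-raw", "api_base_url"],
            cwd="infra", text=True,
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return None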
Integration tests that require the Moral Compass API will skip gracefully if the URL cannot be resolved, rather than failing. This allows the test suite to run in environments where the infrastructure is not available (e.g., forks without access to AWS resources).
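For instance, a test module can implement this graceful skip with a module-level guard like the following (a sketch reusing the hypothetical resolver above; pytest.mark.skipif is standard pytest):

import pytest

# Skip the whole module when the API URL cannot be resolved,
# e.g. on forks without access to AWS resources.
API_URL = resolve_api_base_url()  # hypothetical helper from the sketch above
pytestmark = pytest.mark.skipif(
    API_URL is None,
    reason="Moral Compass API base URL could not be resolved",
)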
During testing, aimodelshare creates AWS resources including API Gateway REST APIs (playgrounds) and IAM users. To manage and clean up these resources:
Use the interactive cleanup script to identify and delete test resources:
# Preview resources without deleting (safe)
python scripts/cleanup_test_resources.py --dry-run
# Interactive cleanup
python scripts/cleanup_test_resources.py
# Cleanup in a specific region
python scripts/cleanup_test_resources.py --region us-west-2
The script will identify matching test resources (API Gateway playgrounds and IAM users prefixed with temporaryaccessAImodelshare) and prompt for confirmation before deleting them.
You can also trigger the cleanup workflow from the GitHub Actions tab.
For complete documentation, see CLEANUP_RESOURCES.md.