
Adversarial Insight ML (AIML)


“Why does your machine lie?”

Adversarial Insight ML (AIML) is a Python package that evaluates the robustness of image classification models against adversarial attacks. AIML automatically tests your models against generated adversarial examples and reports precise, insightful feedback based on several carefully chosen attack methods. AIML also aims to be straightforward and beginner-friendly, so that non-technical users can take full advantage of its functionality.

For more information, you can also visit the PyPI page and the documentation page.

Table of Contents

  • Installation
  • Usage
  • Features
  • Contributing
  • License
  • Acknowledgements
  • Contacts

Installation

To install Adversarial Insight ML, you can use pip:

pip install adversarial-insight-ml

Usage

Here's a simple overview of how to use the package.

You can evaluate your model with the evaluate() function:

from aiml.evaluation.evaluate import evaluate

evaluate(model, test_dataset)

The evaluate() function has two required parameters (an example showing both accepted forms follows this list):

  • input_model (str or model): The machine learning model itself, or the name of the model as a string.
  • input_test_data (str or dataset): The testing dataset itself, or the name of the dataset as a string.
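
For instance, assuming a model object and a test dataset have already been loaded, either form below should work; the string names are illustrative placeholders, not names the package is guaranteed to recognize:

from aiml.evaluation.evaluate import evaluate

# Pass the objects directly:
evaluate(model, test_dataset)

# Or pass names as strings (hypothetical names):
evaluate("my_model", "my_test_dataset")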

The evaluate() function has the following optional parameters (a combined example follows this list):

  • input_train_data (str or dataset, optional): A string of the name of the training dataset or the training dataset itself (default is None).
  • input_shape (tuple, optional): Shape of input data (default is None).
  • clip_values (tuple, optional): Range of input data values (default is None).
  • nb_classes (int, optional): Number of classes in the dataset (default is None).
  • batch_size_attack (int, optional): Batch size for attack testing (default is 64).
  • num_threads_attack (int, optional): Number of threads for attack testing (default is 0).
  • batch_size_train (int, optional): Batch size for training data (default is 64).
  • batch_size_test (int, optional): Batch size for test data (default is 64).
  • num_workers (int, optional): Number of workers to use for data loading (default is half of the available CPU cores).
  • dry (bool, optional): When True, only a single example is tested.
  • attack_para_list (list, optional): List of parameter combinations for the attack.
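
As a rough sketch, a call combining several of these optional parameters might look like the following; the concrete values (CIFAR-10-sized inputs, 10 classes, and so on) are illustrative assumptions, not recommended settings:

from aiml.evaluation.evaluate import evaluate

# model, test_dataset and train_dataset are assumed to be defined already.
evaluate(
    model,
    test_dataset,
    input_train_data=train_dataset,
    input_shape=(3, 32, 32),   # channels-first RGB images, e.g. CIFAR-10
    clip_values=(0.0, 1.0),    # range of the input values
    nb_classes=10,
    batch_size_attack=64,
    num_workers=4,
    dry=True,                  # smoke-test on a single example first
)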

See the demos in the examples/ directory for usage in action.

Features

After evaluating your model with the evaluate() function, AIML provides the following insights:

  • A summary of the adversarial attacks performed, written to a text file named attack_evaluation_result.txt followed by the date (a sketch for locating it follows this list).
  • Samples of the generated images, saved in a directory named img/ followed by the date.
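
Because the summary file name ends with the date, a small helper like the one below can pick up the most recent report; the glob pattern is an assumption about the exact naming, not an official part of the package:

import glob
import os

# Find the newest attack_evaluation_result*.txt by modification time.
reports = sorted(glob.glob("attack_evaluation_result*.txt"), key=os.path.getmtime)
if reports:
    with open(reports[-1]) as f:
        print(f.read())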

Contributing

Code Style
Always adhere to the PEP 8 style guide for writing Python code, allowing up to 99 characters per line as the absolute maximum. Alternatively, just use black.
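
If you go the black route, a pyproject.toml entry along these lines (an assumed project configuration, not one shipped with AIML) keeps its output consistent with the 99-character limit:

# Hypothetical pyproject.toml excerpt
[tool.black]
line-length = 99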

Commit Messages
When making changes to the codebase, please refer to Documentation/SubmittingPatches in the Git repository:

  • Write commit messages in present tense and imperative mood, e.g., "Add feature" instead of "Added feature" or "Adding feature."
  • Craft your messages as if you're giving orders to the codebase to change its behaviour.

Branching
We loosely follow the "GitHub Flow" convention, but not strictly. For example, see the following types of branches:

  • main: This branch is always deployable and reflects the production state.
  • bugfix/*: For bug fixes.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

We extend our sincere appreciation to the following individuals who have been instrumental in the success of this project:

Firstly, our client Mr. Luke Chang. His invaluable guidance and insights steered us from the beginning through every phase, ensuring our work remained aligned with practical needs. This project would not have been possible without his efforts.

We'd also like to express our gratitude to Dr. Asma Shakil, who has coordinated and provided an opportunity for us to work together on this project.

Thank you for being part of this journey.

Warm regards, Team 7

Contacts

Sungjae Jang sjan260@aucklanduni.ac.nz
Takuya Saegusa tsae032@aucklanduni.ac.nz
Haozhe Wei hwei313@aucklanduni.ac.nz
Yuming Zhou yzho739@aucklanduni.ac.nz
Terence Zhang tzha820@aucklanduni.ac.nz
