adversarial-insight-ml
“Why does your machine lie?”
Adversarial Insight ML (AIML) is a Python package that evaluates the robustness of image classification models against adversarial attacks. AIML automatically tests your models against generated adversarial examples and outputs precise, insightful feedback based on several carefully chosen attack methods. Furthermore, AIML aims to be straightforward and beginner-friendly, so that non-technical users can take full advantage of its functionality.
For more information, you can also visit the PyPI page and the documentation page.
To install Adversarial Insight ML, you can use pip:
pip install adversarial-insight-ml
Here's a simple overview of the usage of our package:
You can evaluate your model with the evaluate() function:
from aiml.evaluation.evaluate import evaluate
evaluate(model, test_dataset)
The evaluate() function has two required parameters:

- input_model (str or model): The name of the machine learning model, or the model itself.
- input_test_data (str or dataset): The name of the testing dataset, or the dataset itself.

The evaluate() function has the following optional parameters:

- input_train_data (str or dataset, optional): The name of the training dataset, or the dataset itself (default is None).
- input_shape (tuple, optional): Shape of the input data (default is None).
- clip_values (tuple, optional): Range of input data values (default is None).
- nb_classes (int, optional): Number of classes in the dataset (default is None).
- batch_size_attack (int, optional): Batch size for attack testing (default is 64).
- num_threads_attack (int, optional): Number of threads for attack testing (default is 0).
- batch_size_train (int, optional): Batch size for training data (default is 64).
- batch_size_test (int, optional): Batch size for test data (default is 64).
- num_workers (int, optional): Number of workers to use for data loading (default is half of the available CPU cores).
- dry (bool, optional): When True, only one example is tested.
- attack_para_list (list, optional): List of parameter combinations for the attacks.

See the demos in the examples/ directory for usage in action:
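As a quick reference, the optional parameters and their documented defaults can be collected into a plain dictionary before being passed to evaluate(). The helper below is purely illustrative and is not part of AIML's API; the defaults for dry and attack_para_list are assumptions, since the README does not state them explicitly:

```python
import os

# Illustrative helper (not part of AIML's API): collects the documented
# defaults of evaluate()'s optional parameters so overrides can be passed
# along as **kwargs.
def build_evaluate_kwargs(**overrides):
    defaults = {
        "input_train_data": None,    # training dataset (name or object)
        "input_shape": None,         # shape of the input data
        "clip_values": None,         # range of input data values
        "nb_classes": None,          # number of classes in the dataset
        "batch_size_attack": 64,     # batch size for attack testing
        "num_threads_attack": 0,     # threads for attack testing
        "batch_size_train": 64,      # batch size for training data
        "batch_size_test": 64,       # batch size for test data
        "num_workers": (os.cpu_count() or 2) // 2,  # half the CPU cores
        "dry": False,                # assumed default: test one example when True
        "attack_para_list": None,    # assumed default: attack parameter combos
    }
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise TypeError(f"unknown parameters: {sorted(unknown)}")
    return {**defaults, **overrides}

kwargs = build_evaluate_kwargs(batch_size_attack=32, dry=True)
# then: evaluate(model, test_dataset, **kwargs)
```

This keeps a quick dry run (dry=True, smaller attack batches) one line away from a full evaluation.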
After evaluating your model with the evaluate() function, we provide the following insights:

- A text report, attack_evaluation_result.txt followed by the date.
- An image directory, img/ followed by the date.
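Since the report name carries the date, a small glob can pick out the most recent run. The exact date-suffix format is an assumption based on the naming described above, so treat this as a sketch:

```python
from pathlib import Path

# Illustrative sketch (the exact date-suffix format is an assumption):
# find the most recent attack_evaluation_result*.txt report in out_dir.
# Lexicographic sort works when the suffix is an ISO-style date.
def find_latest_report(out_dir="."):
    reports = sorted(Path(out_dir).glob("attack_evaluation_result*.txt"))
    return reports[-1] if reports else None
```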
Code Style
Always adhere to the PEP 8 style guide for writing Python code, allowing up to 99 characters per line as the absolute maximum. Alternatively, just use black.
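For instance, the 99-character limit can be encoded in a pyproject.toml fragment for black (an illustrative configuration, not one shipped with this project):

```toml
[tool.black]
line-length = 99
```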
Commit Messages
When making changes to the codebase, please refer to the Documentation/SubmittingPatches in the Git repo:
Branching
We conform to a variation of the "GitHub Flow" convention, but not strictly. For example, see the following types of branches:
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgements
We extend our sincere appreciation to the following individuals who have been instrumental in the success of this project:
Firstly, our client, Mr. Luke Chang. His invaluable guidance and insights supported us through every phase, ensuring our work remained aligned with practical needs. This project would not have been possible without his efforts.
We'd also like to express our gratitude to Dr. Asma Shakil, who has coordinated and provided an opportunity for us to work together on this project.
Thank you for being part of this journey.
Warm regards, Team 7
Sungjae Jang sjan260@aucklanduni.ac.nz
Takuya Saegusa tsae032@aucklanduni.ac.nz
Haozhe Wei hwei313@aucklanduni.ac.nz
Yuming Zhou yzho739@aucklanduni.ac.nz
Terence Zhang tzha820@aucklanduni.ac.nz