AI Benchmark Alpha is an open source Python library for evaluating the AI performance of various hardware platforms, including CPUs, GPUs, and TPUs. The benchmark relies on the TensorFlow machine learning library and provides a lightweight and accurate way to assess the inference and training speed of key deep learning models.
In total, AI Benchmark consists of 42 tests grouped into 19 sections, summarized below:
classification: 7 sections
image-to-image mapping: 6 sections
image segmentation: 3 sections
inpainting: 1 section
sentence sentiment analysis: 1 section
text translation: 1 section
For more information and results, please visit the project website: http://ai-benchmark.com/alpha
The benchmark requires the TensorFlow machine learning library to be installed on your system.
On systems that do not have Nvidia GPUs, run the following commands to install AI Benchmark:
pip install tensorflow
pip install ai-benchmark
If you want to check the performance of Nvidia graphics cards, run the following commands:
pip install tensorflow-gpu
pip install ai-benchmark
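Before launching the benchmark, you may want to confirm that TensorFlow can actually see your GPU. The following is a minimal sketch, not part of ai-benchmark itself, and uses the TensorFlow 2.x device API:

# Sketch: check whether TensorFlow detects a GPU (TensorFlow 2.x API).
import tensorflow as tf
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus if gpus else "none (tests will run on CPU)")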
Note 1: If TensorFlow is already installed on your system, you can skip the first command.
Note 2: To run the benchmark on Nvidia GPUs, the NVIDIA CUDA and cuDNN libraries should be installed first. Please find detailed instructions here.
To run AI Benchmark, use the following code:
from ai_benchmark import AIBenchmark
benchmark = AIBenchmark()
results = benchmark.run()
Alternatively, on Linux systems you can type ai-benchmark in the command line to start the tests.
To run inference or training only, use benchmark.run_inference() or benchmark.run_training().
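For example, a minimal sketch (assuming ai-benchmark and TensorFlow are installed) that runs only the inference tests:

from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
inference_results = benchmark.run_inference()    # inference tests only
# training_results = benchmark.run_training()    # training tests only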
AIBenchmark(use_CPU=None, verbose_level=1):
use_CPU = {True, False, None}: whether to run the tests on CPUs (if tensorflow-gpu is installed)
verbose_level = {0, 1, 2, 3}: run tests silently | with a short summary | with information about each run | with TF logs
benchmark.run(precision="normal"):
precision = {"normal", "high"}: if "high" is selected, the benchmark will execute 10 times more runs for each test.
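For instance, a short sketch combining these options (the specific values are chosen only for illustration):

from ai_benchmark import AIBenchmark

# Force the tests onto the CPU and print information about each run.
benchmark = AIBenchmark(use_CPU=True, verbose_level=2)

# Execute 10 times more runs per test for more stable timings.
results = benchmark.run(precision="high")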
A GPU with at least 2 GB of RAM is required for running the inference tests, and 4 GB of RAM for the training tests.
The benchmark is compatible with both TensorFlow 1.x and 2.x versions.
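If you are unsure which TensorFlow version is installed, a quick check (purely illustrative, not part of ai-benchmark) is:

import tensorflow as tf
print(tf.__version__)    # e.g. 1.15.x or 2.x; the benchmark works with both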
Please contact andrey@vision.ee.ethz.ch for any feedback or information.