onnxruntime-openvino
====================
`OpenVINO™ Execution Provider for ONNX Runtime <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html>`_ is a product designed for ONNX Runtime developers who want to get started with OpenVINO™ in their inferencing applications. This product delivers `OpenVINO™ <https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html>`_ inline optimizations which enhance inferencing performance with minimal code modifications.
OpenVINO™ Execution Provider for ONNX Runtime accelerates inference across many `AI models <https://github.com/onnx/models>`_ on a variety of Intel® hardware, such as:

- Intel® CPUs
- Intel® integrated GPUs
- Intel® discrete GPUs
- Intel® integrated NPUs (Windows only)
Requirements
^^^^^^^^^^^^

This package supports:

- Intel® CPUs
- Intel® integrated GPUs
- Intel® discrete GPUs
- Intel® integrated NPUs (Windows only)

``pip3 install onnxruntime-openvino``
Please install the OpenVINO™ PyPI package separately for Windows. For installation instructions on Windows, please refer to `OpenVINO™ Execution Provider for ONNX Runtime for Windows <https://github.com/intel/onnxruntime/releases/>`_.
OpenVINO™ Execution Provider for ONNX Runtime Linux wheels come with pre-built libraries of OpenVINO™ version 2024.1.0, eliminating the need to install OpenVINO™ separately.
For more details on build and installation, please refer to `Build <https://onnxruntime.ai/docs/build/eps.html#openvino>`_.
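As a quick sanity check after installation, you can ask ONNX Runtime which execution providers the installed wheel exposes; the list should include ``OpenVINOExecutionProvider``. This is a minimal illustrative snippet, not from the original documentation:

.. code-block:: python

   import onnxruntime as ort

   # The onnxruntime-openvino wheel should report the OpenVINO™ provider
   # alongside the default CPU provider.
   print(ort.get_available_providers())
   assert "OpenVINOExecutionProvider" in ort.get_available_providers()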
Usage
^^^^^

By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated GPU, discrete GPU, or integrated NPU (Windows only). Invoke the provider config `device type argument <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options>`_ to change the hardware on which inferencing is done.

For more API calls and environment variables, see `Usage <https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options>`_.
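As an illustration, here is a minimal sketch of selecting the provider and overriding the device type from Python. The model path, input shape, and chosen ``device_type`` value are placeholder assumptions you would adapt to your own model and hardware:

.. code-block:: python

   import numpy as np
   import onnxruntime as ort

   # Request the OpenVINO™ Execution Provider and route inference to an
   # Intel® GPU instead of the default CPU via the device_type option.
   session = ort.InferenceSession(
       "model.onnx",  # placeholder: path to your ONNX model
       providers=["OpenVINOExecutionProvider"],
       provider_options=[{"device_type": "GPU"}],  # e.g. "CPU", "GPU", "NPU"
   )

   # Query the model for its real input name; the shape below is a
   # placeholder for a typical 224x224 image classification model.
   input_name = session.get_inputs()[0].name
   dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

   outputs = session.run(None, {input_name: dummy_input})
   print(outputs[0].shape)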
Samples
^^^^^^^

To see what you can do with OpenVINO™ Execution Provider for ONNX Runtime, explore the demos located in the `Examples <https://github.com/microsoft/onnxruntime-inference-examples/tree/main/python/OpenVINO_EP>`_.
License
^^^^^^^

OpenVINO™ Execution Provider for ONNX Runtime is licensed under `MIT <https://github.com/microsoft/onnxruntime/blob/main/LICENSE>`_. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Support
^^^^^^^

Please submit your questions, feature requests, and bug reports via `GitHub Issues <https://github.com/microsoft/onnxruntime/issues>`_.
How to Contribute
^^^^^^^^^^^^^^^^^

We welcome community contributions to OpenVINO™ Execution Provider for ONNX Runtime. If you have an idea for improvement:

- Share your proposal via `GitHub Issues <https://github.com/microsoft/onnxruntime/issues>`_.
- Submit a `Pull Request <https://github.com/microsoft/onnxruntime/pulls>`_.