🤗 Optimum
Optimum is an extension of Transformers 🤖, Diffusers 🧨, TIMM 🖼️, and Sentence-Transformers 🤗, providing a set of optimization tools that enable maximum efficiency when training and running models on targeted hardware, while keeping things easy to use.
Optimum can be installed using pip as follows:

```bash
python -m pip install optimum
```
If you'd like to use the accelerator-specific features of Optimum, you can check the documentation and install the required dependencies according to the table below:
| Accelerator | Installation |
|---|---|
| ONNX | `pip install --upgrade --upgrade-strategy eager optimum[onnx]` |
| ONNX Runtime | `pip install --upgrade --upgrade-strategy eager optimum[onnxruntime]` |
| ONNX Runtime GPU | `pip install --upgrade --upgrade-strategy eager optimum[onnxruntime-gpu]` |
| Intel Neural Compressor | `pip install --upgrade --upgrade-strategy eager optimum[neural-compressor]` |
| OpenVINO | `pip install --upgrade --upgrade-strategy eager optimum[openvino]` |
| IPEX | `pip install --upgrade --upgrade-strategy eager optimum[ipex]` |
| NVIDIA TensorRT-LLM | `docker run -it --gpus all --ipc host huggingface/optimum-nvidia` |
| AMD Instinct GPUs and Ryzen AI NPU | `pip install --upgrade --upgrade-strategy eager optimum[amd]` |
| AWS Trainium & Inferentia | `pip install --upgrade --upgrade-strategy eager optimum[neuronx]` |
| Intel Gaudi Accelerators (HPU) | `pip install --upgrade --upgrade-strategy eager optimum[habana]` |
| FuriosaAI | `pip install --upgrade --upgrade-strategy eager optimum[furiosa]` |
The `--upgrade --upgrade-strategy eager` option is needed to ensure the different packages are upgraded to the latest possible version.
To install from source:

```bash
python -m pip install git+https://github.com/huggingface/optimum.git
```
For the accelerator-specific features, append `optimum[accelerator_type]` to the above command:

```bash
python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git
```
Optimum provides multiple tools to export and run optimized models on various ecosystems:
The export and optimizations can be done both programmatically and with a command line.
🚨🚨🚨 The ONNX integration was moved to optimum-onnx, so make sure to follow its installation instructions. 🚨🚨🚨
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install --upgrade --upgrade-strategy eager optimum[onnx]
```
It is possible to export Transformers, Diffusers, Sentence-Transformers, and TIMM models to the ONNX format and easily perform graph optimization as well as quantization.
For more information on the ONNX export, please check the documentation.
Once the model is exported to the ONNX format, we provide Python classes enabling you to run it seamlessly using ONNX Runtime as the backend. For this, make sure you have ONNX Runtime installed; for more information, check out the installation instructions. More details on how to run ONNX models with the ORTModelForXXX classes can be found in the documentation.
Before you begin, make sure you have all the necessary libraries installed (for example `optimum[openvino]`, `optimum[ipex]`, or `optimum[neural-compressor]`, depending on the backend you target).
You can find more information on the different integrations in our documentation and in the examples of optimum-intel.
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install optimum-executorch@git+https://github.com/huggingface/optimum-executorch.git
```
Users can export Transformers models to ExecuTorch and run inference on edge devices within PyTorch's ecosystem. For more information about exporting Transformers models to ExecuTorch, please check the Optimum-ExecuTorch documentation.
Quanto is a PyTorch quantization backend which allows you to quantize a model either using the Python API or the optimum-cli.
You can see more details and examples in the Quanto repository.
Optimum provides wrappers around the original Transformers Trainer to enable training on powerful hardware easily. We support many providers:
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install --upgrade --upgrade-strategy eager optimum[habana]
```
You can find examples in the documentation and in the examples.
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install --upgrade --upgrade-strategy eager optimum[neuronx]
```
You can find examples in the documentation and in the tutorials.