Architecture | Workflow | LLMs Recipes | Results | Documentations
Intel® Neural Compressor aims to provide popular model compression techniques, such as quantization, pruning (sparsity), distillation, and neural architecture search, on mainstream frameworks such as TensorFlow, PyTorch, and ONNX Runtime, as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch. In particular, the tool provides the key features, typical examples, and open collaborations below:

- Support a wide range of Intel hardware, such as Intel Gaudi AI Accelerators, Intel Core Ultra Processors, Intel Xeon Scalable Processors, Intel Xeon CPU Max Series, Intel Data Center GPU Flex Series, and Intel Data Center GPU Max Series, with extensive testing; support AMD CPUs, ARM CPUs, and NVIDIA GPUs through ONNX Runtime with limited testing; support NVIDIA GPUs for some WOQ algorithms like AutoRound and HQQ.
- Validate popular LLMs such as Llama 2, Falcon, GPT-J, Bloom, and OPT, as well as more than 10,000 broad models such as Stable Diffusion, BERT-Large, and ResNet50, from popular model hubs such as Hugging Face, Torch Vision, and ONNX Model Zoo, with automatic accuracy-driven quantization strategies.
- Collaborate with cloud marketplaces such as Google Cloud Platform, Amazon Web Services, and Azure, software platforms such as Alibaba Cloud, Tencent TACO, and Microsoft Olive, and open AI ecosystems such as Hugging Face, PyTorch, ONNX, ONNX Runtime, and Lightning AI.
Install torch for CPU:

pip install torch --index-url https://download.pytorch.org/whl/cpu

Note: There is a version mapping between Intel Neural Compressor and the Gaudi Software Stack; please refer to this table and make sure to use a matched combination.

Install torch/intel_extension_for_pytorch for Intel GPU:
https://intel.github.io/intel-extension-for-pytorch/index.html#installation

Install torch for other platforms:
https://pytorch.org/get-started/locally

Install TensorFlow:

pip install tensorflow

Install Neural Compressor:

# Install 2.X API + Framework extension API + PyTorch dependency
pip install neural-compressor[pt]
# Install 2.X API + Framework extension API + TensorFlow dependency
pip install neural-compressor[tf]
Note: Further installation methods can be found in the Installation Guide. Check out our FAQ for more details.
Setting up the environment:
pip install "neural-compressor>=2.3" "transformers>=4.34.0" torch torchvision
After successfully installing these packages, try your first quantization program.
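Before running the examples, a quick sanity check that the packages import can save debugging time. A minimal sketch (the version attributes are standard for these packages, not part of the original README):

# Print versions to confirm the environment matches the pinned requirements.
import neural_compressor
import torch
import transformers

print("neural-compressor:", neural_compressor.__version__)
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)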
The following example code demonstrates FP8 quantization, which is supported by the Intel Gaudi2 AI Accelerator.
To try it on Intel Gaudi2, a Docker image with the Gaudi Software Stack is recommended; refer to the following script for environment setup. More details can be found in the Gaudi Guide.
# Run a container with an interactive shell
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.19.0/ubuntu24.04/habanalabs/pytorch-installer-2.5.1:latest
Run the example:
from neural_compressor.torch.quantization import (
    FP8Config,
    prepare,
    convert,
)
import torch
import torchvision.models as models

model = models.resnet18()
qconfig = FP8Config(fp8_config="E4M3")
model = prepare(model, qconfig)

# Customer-defined calibration. Below is a dummy calibration.
model(torch.randn(1, 3, 224, 224).to("hpu"))

model = convert(model)

output = model(torch.randn(1, 3, 224, 224).to("hpu")).to("cpu")
print(output.shape)
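In practice, the single dummy forward pass marked as calibration above would be replaced by a short loop over representative data, run before convert() is called. A minimal sketch, where the dataset and loader are hypothetical stand-ins for real calibration images:

# Hypothetical calibration loop: feed representative samples through the
# *prepared* model (before convert) so FP8 statistics can be collected.
import torch
from torch.utils.data import DataLoader, TensorDataset

calib_data = TensorDataset(torch.randn(32, 3, 224, 224))  # stand-in for real images
calib_loader = DataLoader(calib_data, batch_size=8)

with torch.no_grad():
    for (batch,) in calib_loader:
        model(batch.to("hpu"))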
The following example code demonstrates loading a weight-only quantized large language model on the Intel Gaudi2 AI Accelerator.
import torch
from neural_compressor.torch.quantization import load

model_name = "TheBloke/Llama-2-7B-GPTQ"
model = load(
    model_name_or_path=model_name,
    format="huggingface",
    device="hpu",
    torch_dtype=torch.bfloat16,
)
Note: Intel Neural Compressor converts the model from the auto-gptq format to the HPU format on the first load and saves hpu_model.safetensors to the local cache directory for subsequent loads, so the first load may take a while.
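Once loaded, the model can be used like a regular Hugging Face causal language model. A minimal sketch, assuming the loaded model exposes the standard generate API and the Gaudi software stack is available:

from transformers import AutoTokenizer

# Tokenize a prompt, move it to the HPU, and decode a short greedy continuation.
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer("What does Intel Neural Compressor do?", return_tensors="pt").to("hpu")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))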
| Overview | | | | |
| --- | --- | --- | --- | --- |
| Architecture | Workflow | APIs | LLMs Recipes | Examples |
| **PyTorch Extension APIs** | | | | |
| Overview | Dynamic Quantization | Static Quantization | Smooth Quantization | |
| Weight-Only Quantization | FP8 Quantization | MX Quantization | Mixed Precision | |
| **TensorFlow Extension APIs** | | | | |
| Overview | Static Quantization | Smooth Quantization | | |
| **Transformers-like APIs** | | | | |
| Overview | | | | |
| **Other Modules** | | | | |
| Auto Tune | Benchmark | | | |
Note: Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently available only in the 2.X API.
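The 3.X PyTorch extension APIs listed in the table above generally follow the same prepare/convert pattern shown in the FP8 example. As a hedged sketch, weight-only RTN quantization of a toy model, assuming RTNConfig is exported by neural_compressor.torch.quantization as in recent 3.X releases:

import torch
from neural_compressor.torch.quantization import RTNConfig, prepare, convert

# A toy model; in practice this would be an LLM or other large network.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
)

# Round-to-nearest (RTN) weight-only quantization requires no calibration data.
model = prepare(model, RTNConfig())
model = convert(model)

print(model(torch.randn(2, 64)).shape)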
Note: View Full Publication List.