onnx-tool

A parser, editor, and profiler tool for ONNX models.

Supported Models:

  • NLP: BERT, T5, GPT, LLaMa, MPT (TransformerModel)
  • Diffusion: Stable Diffusion (TextEncoder, VAE, UNET)
  • CV: BEVFormer, MobileNet, YOLO, ...
  • Audio: sovits, LPCNet

Basic Parse and Edit

You can load any ONNX file with onnx_tool.Model:

  • Change the graph structure with onnx_tool.Graph;
  • Change op attributes and I/O tensors with onnx_tool.Node;
  • Change tensor data or type with onnx_tool.Tensor.

To apply your changes, just call the save_model method of onnx_tool.Model or onnx_tool.Graph.

Please refer to benchmark/examples.py.
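A minimal sketch of that flow. The graph attribute name is an assumption, and the concrete editing calls live in benchmark/examples.py, so the edit step is left as a comment:

```python
import onnx_tool

m = onnx_tool.Model('model.onnx')  # parse any ONNX file
g = m.graph                        # onnx_tool.Graph (attribute name assumed)

# Edit here: restructure the graph via onnx_tool.Graph, change op attributes
# and I/O tensors via onnx_tool.Node, or rewrite data via onnx_tool.Tensor.

m.save_model('edited.onnx')        # apply your changes to a new file
```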


Shape Inference & Profile Model

All profiling data is built on top of shape inference results.

ONNX graph with tensor shapes: (figure omitted)

Regular model profiling table: (figure omitted)

Sparse profiling table: (figure omitted)

Introduction: data/Profile.md.
PyTorch usage: data/PytorchUsage.md.
TensorFlow usage: data/TensorflowUsage.md.
Examples: benchmark/examples.py.
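A hedged sketch of that flow. The Graph methods used here (shape_infer, profile, print_node_map) are assumptions drawn from the project's examples, and the input name 'data' is a placeholder for your model's real input:

```python
import numpy as np
import onnx_tool

m = onnx_tool.Model('resnet50.onnx')  # model path is a placeholder

# Shape inference first: bind a concrete input so every tensor gets a shape.
m.graph.shape_infer({'data': np.zeros((1, 3, 224, 224), np.float32)})

m.graph.profile()                      # accumulate MACs/params per node
m.graph.print_node_map()               # print the profiling table
m.graph.print_node_map('profile.csv')  # or write it to a CSV file
```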


Compute Graph with Shape Engine

From a raw graph to a compute graph: (figure omitted)

Remove the shape-calculation layers (created by the ONNX export) to get a compute graph, then use the Shape Engine to update tensor shapes at runtime, as sketched below.
Examples: benchmark/shape_regress.py and benchmark/examples.py.
Integrating the compute graph and Shape Engine into a C++ inference engine: data/inference_engine.md.
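The runtime flow described above might look like this sketch. Every method name here (shape_regress, get_compute_graph, update_variable, update_variables) is an illustrative assumption rather than verified API; benchmark/shape_regress.py holds the real usage:

```python
import onnx_tool

m = onnx_tool.Model('model.onnx')
g = m.graph  # attribute name assumed

# Hypothetical: build a shape engine over symbolic input axes 'h' and 'w'.
engine = g.shape_regress({'data': [1, 3, 'h', 'w']},          # symbolic shape
                         {'h': (224, 299), 'w': (224, 299)})  # valid ranges

# Hypothetical: strip shape-calculation layers, keeping only compute ops.
cg = g.get_compute_graph()

# At runtime, resize inputs without re-running full shape inference.
engine.update_variable('h', 256)
engine.update_variable('w', 256)
engine.update_variables()  # refresh every tensor shape from the new axes
```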


Memory Compression

Activation Compression

Activation memory, also called temporary memory, is created for each op's output. Only the final activations marked as model outputs have to be kept, so you don't need to reserve separate memory for every activation tensor; tensors with non-overlapping lifetimes can reuse one optimized memory block.

For large language models and high-resolution CV models, activation memory compression is key to saving memory.
The compression method shrinks activation memory to roughly 5% of its native size on most models.
For example:

| Model | Native Memory Size (MB) | Compressed Memory Size (MB) | Compression Ratio (%) |
| --- | --- | --- | --- |
| StableDiffusion(VAE_encoder) | 14,245 | 540 | 3.7 |
| StableDiffusion(VAE_decoder) | 25,417 | 1,140 | 4.48 |
| StableDiffusion(Text_encoder) | 215 | 5 | 2.5 |
| StableDiffusion(UNet) | 36,135 | 2,232 | 6.2 |
| GPT2 | 40 | 2 | 6.9 |
| BERT | 2,170 | 27 | 1.25 |

Code example: benchmark/compression.py.
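Conceptually, each activation is live only from the op that produces it until its last consumer, so tensors whose lifetimes don't overlap can share the same buffer. A simplified greedy planner illustrating the idea (not onnx-tool's implementation):

```python
def plan_memory(tensors):
    """tensors: list of (name, size_bytes, first_use, last_use), op indices.
    Greedy first-fit: reuse a region once its previous tensor has died."""
    offsets, regions = {}, []  # regions: [offset, size, last_use]
    total = 0
    for name, size, first, last in sorted(tensors, key=lambda t: t[2]):
        for r in regions:
            if r[2] < first and size <= r[1]:  # dead region, big enough
                offsets[name] = r[0]
                r[2] = last                    # region is live again
                break
        else:                                  # nothing reusable: grow arena
            offsets[name] = total
            regions.append([total, size, last])
            total += size
    return offsets, total

# Four activations: 'a' and 'c' never overlap, so they share one region.
acts = [('a', 1024, 0, 1), ('b', 2048, 1, 2), ('c', 1024, 2, 3), ('d', 512, 3, 4)]
offsets, arena = plan_memory(acts)
print(offsets, arena)  # arena (3072) is smaller than the naive sum (4608)
```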

Weight Compression

An fp32 model with 7B parameters takes 28 GB of disk and memory space; you cannot even run the model if your device doesn't have that much memory. Weight compression is therefore critical for running large language models. As a reference, a 7B model with int4 symmetric per-block(32) quantization (llama.cpp's q4_0 quantization method) is only ~0.156x the size of the fp32 model.
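That ratio checks out under the assumption that each 32-weight block stores 32 int4 values plus one fp32 scale: 32 × 4 + 32 = 160 bits per block, i.e. 5 bits per weight, and 5/32 = 0.15625 ≈ 0.156x the 32 bits per weight of fp32.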

Current support:

  • [fp16]
  • [int8]x[symmetric/asymmetric]x[per tensor/per channel/per block]
  • [int4]x[symmetric/asymmetric]x[per tensor/per channel/per block]

Code examples: benchmark/examples.py.
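A minimal numpy sketch of one supported scheme, int4 symmetric per-block(32) quantization. This illustrates the technique, not onnx-tool's API; real storage would also pack two int4 values per byte:

```python
import numpy as np

BLOCK = 32

def quant_int4_sym_per_block(w):
    """int4, symmetric, one fp32 scale per 32-weight block."""
    blocks = w.reshape(-1, BLOCK).astype(np.float32)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0  # map max|w| to level 7
    scale[scale == 0] = 1.0                                  # guard all-zero blocks
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale

def dequant(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(4096 * 32).astype(np.float32)
q, s = quant_int4_sym_per_block(w)
err = np.abs(dequant(q, s) - w).mean()
print(f'mean abs error: {err:.4f}')  # small per-weight reconstruction error
```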


How to install

pip install onnx-tool

OR

pip install --upgrade git+https://github.com/ThanatosShinji/onnx-tool.git

Requires python>=3.6.

If pip install onnx-tool fails because of onnx's installation, try installing a lower version of onnx first, e.g. pip install onnx==1.8.1, then run pip install onnx-tool again.


Known Issues

  • Loop op is not supported
  • Sequence type is not supported

Results of ONNX Model Zoo and SOTA models

Some models have dynamic input shapes, and their MACs vary with the input shape. The input shapes used for these results are written in data/public/config.py. These ONNX models, with all tensors' shapes included, can be downloaded from: baidu drive (code: p91k), google drive.

| Model | Params (M) | MACs (M) |
| --- | --- | --- |
| GPT-J 1 layer | 464 | 173,398 |
| MPT 1 layer | 261 | 79,894 |
| text_encoder | 123.13 | 6,782 |
| UNet2DCondition | 859.52 | 888,870 |
| VAE_encoder | 34.16 | 566,371 |
| VAE_decoder | 49.49 | 1,271,959 |
| SqueezeNet 1.0 | 1.23 | 351 |
| AlexNet | 60.96 | 665 |
| GoogleNet | 6.99 | 1,606 |
| googlenet_age | 5.98 | 1,605 |
| LResNet100E-IR | 65.22 | 12,102 |
| BERT-Squad | 113.61 | 22,767 |
| BiDAF | 18.08 | 9.87 |
| EfficientNet-Lite4 | 12.96 | 1,361 |
| Emotion | 12.95 | 877 |
| Mask R-CNN | 46.77 | 92,077 |

| Model | Params (M) | MACs (M) |
| --- | --- | --- |
| LLaMa 1 layer | 618 | 211,801 |
| BEVFormer Tiny | 33.7 | 210,838 |
| rvm_mobilenetv3 | 3.73 | 4,289 |
| yolov4 | 64.33 | 3,319 |
| ConvNeXt-L | 229.79 | 34,872 |
| edgenext_small | 5.58 | 1,357 |
| SSD | 19.98 | 216,598 |
| RealESRGAN | 16.69 | 73,551 |
| ShuffleNet | 2.29 | 146 |
| GPT-2 | 137.02 | 1,103 |
| T5-encoder | 109.62 | 686 |
| T5-decoder | 162.62 | 1,113 |
| RoBERTa-BASE | 124.64 | 688 |
| Faster R-CNN | 44.10 | 46,018 |
| FCN ResNet-50 | 35.29 | 37,056 |
| ResNet50 | 25 | 3,868 |
