keras2onnx

Converts Machine Learning models to ONNX for use in Windows ML

  • 1.7.0
  • PyPI

Introduction

The keras2onnx model converter enables users to convert Keras models into the ONNX model format. Initially, the Keras converter was developed in the project onnxmltools. keras2onnx converter development was moved into an independent repository to support more kinds of Keras models and reduce the complexity of mixing multiple converters.

Most of the common Keras layers have been supported for conversion. Please refer to the Keras documentation or tf.keras docs for details on Keras layers.

Windows Machine Learning (WinML) users can use WinMLTools, which wraps its call to keras2onnx to convert Keras models. If you want to use the keras2onnx converter directly, please refer to the WinML Release Notes to identify the ONNX opset number that corresponds to your WinML version.
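
For example, once you have identified the opset for your WinML version, you can pass it to the converter through the target_opset argument. A minimal sketch (the model and the opset value 9 are placeholders; use the opset listed in the WinML Release Notes for your build):

import tensorflow as tf
import keras2onnx

# A trivial tf.keras model, only to illustrate the call.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

# Replace 9 with the opset number that matches your WinML version.
onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=9)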

keras2onnx has been tested on Python 3.5, 3.6, and 3.7, with tensorflow 1.x/2.0/2.1 (CI build). It does not support Python 2.x.

Install

You can install the latest release of keras2onnx from PyPI. However, package releases are currently paused, so please install from source instead; support for Keras and tf.keras on TensorFlow 2.x is only available in the source version.

pip install keras2onnx

or install from source:

pip install -U git+https://github.com/microsoft/onnxconverter-common
pip install -U git+https://github.com/onnx/keras-onnx

Before running the converter, note that TensorFlow has to be installed in your Python environment. You can choose either the tensorflow/tensorflow-cpu package (CPU version) or tensorflow-gpu (GPU version).
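
For example, one of the following, depending on your hardware (package names as published on PyPI):

pip install tensorflow        # standard build
pip install tensorflow-cpu    # CPU-only build
pip install tensorflow-gpu    # GPU build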

Notes

Since version 1.6.5, keras2onnx supports the Keras subclassing models introduced in TensorFlow 2.0. Some typical subclassing models, such as huggingface/transformers, have been converted to ONNX and validated with ONNXRuntime.
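
A minimal sketch of converting a subclassed tf.keras model, assuming TensorFlow 2.x and keras2onnx installed from source (the toy model below is only illustrative, not one of the validated transformer models):

import numpy as np
import tensorflow as tf
import keras2onnx

class TinyModel(tf.keras.Model):
    # A toy subclassed model with two dense layers.
    def __init__(self):
        super(TinyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(16, activation='relu')
        self.dense2 = tf.keras.layers.Dense(4)

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))

model = TinyModel()
# Run the model once so that its variables and input shapes are built.
model.predict(np.zeros((1, 8), dtype=np.float32))

# Convert the subclassed model to ONNX.
onnx_model = keras2onnx.convert_keras(model, model.name)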

As of version 2.3, multi-backend Keras (keras.io) no longer supports TensorFlow versions above 2.0, and its author recommends switching to tf.keras for new features.

Multi-backend Keras and tf.keras:

Both Keras model types are now supported by the keras2onnx converter. If the Keras package in the user's Python environment was installed from keras.io and the installed TensorFlow version is 1.x, the converter treats the model as one created by the keras.io package. Otherwise, it converts the model through tf.keras.

If you want to override this behaviour, set the environment variable TF_KERAS=1 before invoking the converter Python API.
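
For example, a sketch of forcing the tf.keras path from inside a script; setting the variable before keras2onnx is imported is the safe way to make sure it is picked up:

import os

# Force keras2onnx to treat models as tf.keras models,
# even if multi-backend Keras is also installed.
os.environ['TF_KERAS'] = '1'

import keras2onnx  # import after the variable has been set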

Development

keras2onnx depends on onnxconverter-common. In practice, the latest converter code requires the latest version of onnxconverter-common, so if you install keras2onnx from source, please install onnxconverter-common from source first.

Validated pre-trained Keras models

Most Keras models can be converted successfully by calling keras2onnx.convert_keras, including CV, GAN, NLP, and speech models. See the tutorial here. However, some models with many custom operations, such as YOLOv3 and Mask R-CNN, need custom conversion.

Scripts

You can convert Keras models to ONNX from a Python script using the following API:

import keras2onnx
keras2onnx.convert_keras(model, name=None, doc_string='', target_opset=None, channel_first_inputs=None):
    # type: (keras.Model, str, str, int, []) -> onnx.ModelProto
    """
    :param model: keras model
    :param name: the converted onnx model internal name
    :param doc_string: doc string of the converted ONNX model
    :param target_opset: the targeted ONNX opset number
    :param channel_first_inputs: a list of channel-first inputs
    :return: the converted model as an onnx.ModelProto
    """

Use the following script to convert a Keras application model to ONNX and then run inference on it:

import numpy as np
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input
import keras2onnx
import onnxruntime

# image preprocessing
img_path = 'street.jpg'   # make sure the image is in img_path
img_size = 224
img = image.load_img(img_path, target_size=(img_size, img_size))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

# load keras model
from keras.applications.resnet50 import ResNet50
model = ResNet50(include_top=True, weights='imagenet')

# convert to onnx model
onnx_model = keras2onnx.convert_keras(model, model.name)

# runtime prediction
content = onnx_model.SerializeToString()
sess = onnxruntime.InferenceSession(content)
x = x if isinstance(x, list) else [x]
feed = dict([(inp.name, x[n]) for n, inp in enumerate(sess.get_inputs())])
pred_onnx = sess.run(None, feed)

The inference result is a list that aligns with the Keras model's prediction from model.predict(). An alternative way to load the ONNX model into a runtime session is to save the model first:

temp_model_file = 'model.onnx'
keras2onnx.save_model(onnx_model, temp_model_file)
sess = onnxruntime.InferenceSession(temp_model_file)
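
To double-check the claim above that the ONNX output aligns with model.predict(), a small sketch reusing the variables from the earlier script (the tolerances are illustrative):

# Compare the ONNXRuntime output with the original Keras prediction.
expected = model.predict(x[0])   # x was wrapped in a list above
np.testing.assert_allclose(expected, pred_onnx[0], rtol=1e-3, atol=1e-5)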

Contribute

We welcome contributions in the form of feedback, ideas, or code.

License

MIT License
