What is @tensorflow/tfjs-converter?
@tensorflow/tfjs-converter is a library that allows you to import TensorFlow SavedModel and Keras models into TensorFlow.js. This enables you to run pre-trained models in the browser or in Node.js, making it easier to deploy machine learning models in web applications.
What are @tensorflow/tfjs-converter's main functionalities?
Import TensorFlow SavedModel
This feature allows you to import a TensorFlow SavedModel into TensorFlow.js. The code sample demonstrates how to load a SavedModel from a specified path and log the model to the console.
const tf = require('@tensorflow/tfjs');
const tfConverter = require('@tensorflow/tfjs-converter');

async function loadModel() {
  const model = await tfConverter.loadGraphModel('path/to/saved_model');
  console.log(model);
}

loadModel();
Import Keras Model
This feature allows you to import a Keras model into TensorFlow.js. The code sample demonstrates how to load a Keras model from a specified JSON file and log the model to the console.
const tf = require('@tensorflow/tfjs');
const tfConverter = require('@tensorflow/tfjs-converter');

async function loadKerasModel() {
  const model = await tfConverter.loadLayersModel('path/to/keras_model.json');
  console.log(model);
}

loadKerasModel();
Run Inference
This feature allows you to run inference using a loaded model. The code sample demonstrates how to load a SavedModel, create a tensor as input, run the model's predict function, and print the output.
const tf = require('@tensorflow/tfjs');
const tfConverter = require('@tensorflow/tfjs-converter');

async function runInference() {
  const model = await tfConverter.loadGraphModel('path/to/saved_model');
  const input = tf.tensor([1, 2, 3, 4]);
  const output = model.predict(input);
  output.print();
}

runInference();
Other packages similar to @tensorflow/tfjs-converter
@tensorflow/tfjs
@tensorflow/tfjs is the core TensorFlow.js library that provides the main functionalities for defining, training, and running machine learning models in the browser and Node.js. While @tensorflow/tfjs-converter focuses on importing pre-trained models, @tensorflow/tfjs provides the tools to build and train models from scratch.
onnxjs
onnxjs is a library for running ONNX (Open Neural Network Exchange) models in the browser and Node.js. It provides similar functionalities to @tensorflow/tfjs-converter but focuses on the ONNX model format instead of TensorFlow SavedModel and Keras models.
brain.js
brain.js is a JavaScript library for neural networks that runs in the browser and Node.js. It provides functionalities for creating, training, and running neural networks, but it does not support importing TensorFlow or Keras models like @tensorflow/tfjs-converter.
Getting started
TensorFlow.js converter is an open source library to load a pretrained
TensorFlow SavedModel
into the browser and run inference through TensorFlow.js.
Importing your model is a 2-step process:
- A Python script that converts a TensorFlow SavedModel to a web-friendly format. If you already have a converted model, or are using an already hosted model (e.g. MobileNet), skip this step.
- A JavaScript API for loading and running inference.
Step 1: Converting a SavedModel to a web-friendly format
- Clone the GitHub repo:
$ git clone git@github.com:tensorflow/tfjs-converter.git
- Install the following pip packages:
$ pip install tensorflow numpy absl-py protobuf
- Run the convert.py script:
$ cd tfjs-converter/
$ python scripts/convert.py \
--saved_model_dir=/tmp/mobilenet/ \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
--output_graph=/tmp/mobilenet/web_model.pb \
--saved_model_tags=serve
Options | Description
---|---
saved_model_dir | Full path of the saved model directory
output_node_names | The names of the output nodes, comma separated
output_graph | Full path of the name for the output graph file
saved_model_tags | Tags of the MetaGraphDef to load, in comma-separated format. Defaults to serve.
Web-friendly format
The conversion script above produces 3 types of files:

- web_model.pb (the dataflow graph)
- weights_manifest.json (the weight manifest file)
- group1-shard\*of\* (a collection of binary weight files)
For example, here is the MobileNet model converted and served at the
following locations:
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/optimized_model.pb
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/weights_manifest.json
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/group1-shard1of5
...
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/group1-shard5of5
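For reference, weights_manifest.json is a JSON array of weight groups, each listing its binary shard paths and per-weight metadata. A minimal sketch of its shape (the entries below are illustrative, not the actual MobileNet manifest):

[{
  "paths": ["group1-shard1of2", "group1-shard2of2"],
  "weights": [
    {"name": "MobilenetV1/Conv2d_0/weights", "shape": [3, 3, 3, 32], "dtype": "float32"}
  ]
}]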
Step 2: Loading and running in the browser
- Install the tfjs-converter npm package
yarn add @tensorflow/tfjs-converter
or npm install @tensorflow/tfjs-converter
- Instantiate the TFModel class and run inference.
import * as tfc from '@tensorflow/tfjs-core';
import {TFModel} from '@tensorflow/tfjs-converter';

const MODEL_URL = 'https://.../mobilenet/web_model.pb';
const WEIGHTS_URL = 'https://.../mobilenet/weights_manifest.json';

const model = new TFModel(MODEL_URL, WEIGHTS_URL);
const cat = document.getElementById('cat');
model.predict({input: tfc.fromPixels(cat)});
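The predict call returns a tensor whose values you can read back asynchronously. A minimal sketch of consuming the result, assuming the model produces a single output tensor:

const output = model.predict({input: tfc.fromPixels(cat)});
// data() copies the tensor's values back to JavaScript as a TypedArray.
output.data().then(values => console.log(values));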
Check out our working MobileNet demo.
Supported operations
Currently TensorFlow.js only supports a limited set of TensorFlow ops. See the
full list.
If your model uses an unsupported op, the convert.py script will fail and
produce a list of the unsupported ops in your model. Please file issues to let us
know what ops you need support for.
FAQ
- What TensorFlow models does the converter currently support?
Image-based models (e.g. MobileNet, SqueezeNet) are currently the best supported. Models with control flow ops (e.g. RNNs) are not yet supported. The convert.py script will validate your model and list any unsupported ops it finds. See this list for the ops that are currently supported.
- Will models with large weights work?
While the browser supports loading 100-500MB models, the page load time, the inference time, and the overall user experience would not be great. We recommend using models that are designed for edge devices (e.g. phones); these models are usually smaller than 30MB.
- Will the model and weight files be cached in the browser?
Yes, we split the weights into 4MB shard files, which enables the browser to cache them automatically. If the model architecture file is less than 4MB (most models are), it will also be cached.
- Will it support models with quantization?
Not yet. We are planning to add quantization support soon.
- Why is the first call to the predict() method so much slower than subsequent calls?
The first call also includes the compilation time of the WebGL shader programs for the model. After the first call the shader programs are cached, which makes subsequent calls much faster. You can warm up the cache by calling predict with an all-zero input right after the model finishes loading, as in the sketch below.
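A minimal warm-up sketch, continuing from the Step 2 example (the [224, 224, 3] input shape is an assumption for a MobileNet-style model; use your model's actual input shape):

// The first predict() call compiles and caches the WebGL shaders,
// so run it once on dummy data before serving real inputs.
// [224, 224, 3] is an assumed MobileNet-style input shape.
model.predict({input: tfc.zeros([224, 224, 3])});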
Development
To build TensorFlow.js converter from source, we need to clone the project and prepare
the dev environment:
$ git clone https://github.com/tensorflow/tfjs-converter.git
$ cd tfjs-converter
$ yarn
We recommend using Visual Studio Code for development. Make sure to install the
TSLint VSCode extension and npm clang-format 1.2.2 or later with the
Clang-Format VSCode extension for auto-formatting.
Before submitting a pull request, make sure the code passes all the tests and is clean of lint errors:
$ yarn test
$ yarn lint
To run a subset of tests and/or on a specific browser:
$ yarn test --browsers=Chrome --grep='execute'
> ...
> Chrome 64.0.3282 (Linux 0.0.0): Executed 39 of 39 SUCCESS (0.129 secs / 0 secs)
To run the tests once and exit the karma process (helpful on Windows):
$ yarn test --single-run