@tensorflow/tfjs-converter
@tensorflow/tfjs-converter is a library that allows you to import TensorFlow SavedModel and Keras models into TensorFlow.js. This enables you to run pre-trained models in the browser or in Node.js, making it easier to deploy machine learning models in web applications.
Import TensorFlow SavedModel
This feature allows you to import a TensorFlow SavedModel into TensorFlow.js. The code sample demonstrates how to load a model that has been converted to the TensorFlow.js graph format (a model.json file) and log it to the console.
const tf = require('@tensorflow/tfjs'); // registers the runtime and backends
const tfConverter = require('@tensorflow/tfjs-converter');

async function loadModel() {
  // loadGraphModel expects the path/URL of the converted model.json file.
  const model = await tfConverter.loadGraphModel('path/to/model.json');
  console.log(model);
}

loadModel();
Import Keras Model
This feature allows you to import a Keras model into TensorFlow.js. The code sample demonstrates how to load a Keras model that has been converted to the TensorFlow.js Layers format (a JSON file plus weight shards) and log it to the console.
const tf = require('@tensorflow/tfjs');

async function loadKerasModel() {
  // loadLayersModel (exported by @tensorflow/tfjs) reads the Layers-format
  // model.json produced by converting a Keras model.
  const model = await tf.loadLayersModel('path/to/keras_model.json');
  console.log(model);
}

loadKerasModel();
Run Inference
This feature allows you to run inference using a loaded model. The code sample demonstrates how to load a converted model, create a tensor as input, run the model's predict function, and print the output.
const tf = require('@tensorflow/tfjs');
const tfConverter = require('@tensorflow/tfjs-converter');

async function runInference() {
  const model = await tfConverter.loadGraphModel('path/to/model.json');
  // The input shape must match what the model expects; a flat 4-element
  // tensor is used here purely as a placeholder.
  const input = tf.tensor([1, 2, 3, 4]);
  const output = model.predict(input);
  output.print();
}

runInference();
@tensorflow/tfjs is the core TensorFlow.js library that provides the main functionalities for defining, training, and running machine learning models in the browser and Node.js. While @tensorflow/tfjs-converter focuses on importing pre-trained models, @tensorflow/tfjs provides the tools to build and train models from scratch.
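As a quick sketch of that difference, a small model can be defined and trained from scratch with @tensorflow/tfjs alone (the layer sizes and toy data below are illustrative, not from the package docs):
const tf = require('@tensorflow/tfjs');

async function trainToyModel() {
  // Define a tiny model: one hidden dense layer and a single output.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 8, activation: 'relu', inputShape: [4]}));
  model.add(tf.layers.dense({units: 1}));
  model.compile({optimizer: 'adam', loss: 'meanSquaredError'});

  // Train on random toy data, then run a prediction.
  const xs = tf.randomNormal([16, 4]);
  const ys = tf.randomNormal([16, 1]);
  await model.fit(xs, ys, {epochs: 5});
  model.predict(tf.randomNormal([1, 4])).print();
}

trainToyModel();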
onnxjs is a library for running ONNX (Open Neural Network Exchange) models in the browser and Node.js. It provides similar functionalities to @tensorflow/tfjs-converter but focuses on the ONNX model format instead of TensorFlow SavedModel and Keras models.
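For comparison, the onnxjs flow looks roughly like this (a sketch assuming an already-exported model.onnx file and a [1, 4] input; check the onnxjs documentation for the exact API):
const onnx = require('onnxjs');

async function runOnnxModel() {
  // Load an ONNX model rather than a TensorFlow/Keras one.
  const session = new onnx.InferenceSession();
  await session.loadModel('path/to/model.onnx');
  const input = new onnx.Tensor(new Float32Array([1, 2, 3, 4]), 'float32', [1, 4]);
  const outputMap = await session.run([input]);
  console.log(outputMap.values().next().value.data);
}

runOnnxModel();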
brain.js is a JavaScript library for neural networks that runs in the browser and Node.js. It provides functionalities for creating, training, and running neural networks, but it does not support importing TensorFlow or Keras models like @tensorflow/tfjs-converter.
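A minimal brain.js sketch, training a small network directly in JavaScript instead of importing one (the XOR data is illustrative):
const brain = require('brain.js');

// Train a tiny feed-forward network on XOR; no pre-trained model involved.
const net = new brain.NeuralNetwork({hiddenLayers: [3]});
net.train([
  {input: [0, 0], output: [0]},
  {input: [0, 1], output: [1]},
  {input: [1, 0], output: [1]},
  {input: [1, 1], output: [0]},
]);
console.log(net.run([1, 0])); // approximately [1]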
TensorFlow.js converter is an open source library to load a pretrained TensorFlow SavedModel, Frozen Model, Session Bundle or TensorFlow Hub module into the browser and run inference through TensorFlow.js.
(Note: TensorFlow has deprecated session bundle format, please switch to SavedModel.)
Importing a model is a 2-step process: first convert it with the tensorflowjs Python package, then load and run the converted model through the TensorFlow.js API. Install the converter with pip:
$ pip install tensorflowjs
Usage:
SavedModel example:
$ tensorflowjs_converter \
--input_format=tf_saved_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
--saved_model_tags=serve \
/mobilenet/saved_model \
/mobilenet/web_model
Frozen model example:
$ tensorflowjs_converter \
--input_format=tf_frozen_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
/mobilenet/frozen_model.pb \
/mobilenet/web_model
Session bundle model example:
$ tensorflowjs_converter \
--input_format=tf_session_bundle \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
/mobilenet/session_bundle \
/mobilenet/web_model
TensorFlow Hub module example:
$ tensorflowjs_converter \
--input_format=tf_hub \
'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' \
/mobilenet/web_model
Keras h5 model example:
$ tensorflowjs_converter \
--input_format=keras \
/tmp/my_keras_model.h5 \
/tmp/my_tfjs_model
tf.keras SavedModel model example:
$ tensorflowjs_converter \
--input_format=keras_saved_model \
/tmp/my_tf_keras_saved_model/1542211770 \
/tmp/my_tfjs_model
Note that the input path used above is a subfolder that has a Unix epoch time (1542211770) and is generated automatically by TensorFlow when it saves a tf.keras model in the SavedModel format.
Positional Arguments | Description |
---|---|
input_path | Full path of the saved model directory, session bundle directory, frozen model file or TensorFlow Hub module handle or path. |
output_path | Path for all output artifacts. |
Options | Description |
---|---|
--input_format | The format of input model, use tf_saved_model for SavedModel, tf_frozen_model for frozen model, tf_session_bundle for session bundle, tf_hub for TensorFlow Hub module, tfjs_layers_model for TensorFlow.js JSON format, and keras for Keras HDF5. |
--output_node_names | The names of the output nodes, separated by commas. |
--output_format | The desired output format. Must be tfjs_layers_model, tfjs_graph_model or keras. Not all pairs of input-output formats are supported. Please file a GitHub issue if your desired input-output pair is not supported. |
--saved_model_tags | Only applicable to SavedModel conversion. Tags of the MetaGraphDef to load, in comma separated format. Defaults to serve. |
--signature_name | Only applicable to TensorFlow Hub module conversion, signature to load. Defaults to default. See https://www.tensorflow.org/hub/common_signatures/. |
--strip_debug_ops | Strips out TensorFlow debug operations Print, Assert, CheckNumerics. Defaults to True. |
--quantization_bytes | How many bytes to optionally quantize/compress the weights to. Valid values are 1 and 2, which will quantize int32 and float32 to 1 or 2 bytes respectively. The default (unquantized) size is 4 bytes. |
Note: Unless stated otherwise, we can infer the value of --output_format from the value of --input_format, so the --output_format flag can be omitted in most cases.
--input_format | --output_format | Description |
---|---|---|
keras | tfjs_layers_model | Convert a keras or tf.keras HDF5 model file to TensorFlow.js Layers model format. Use tf.loadLayersModel() to load the model in JavaScript. |
keras_saved_model | tfjs_layers_model | Convert a tf.keras SavedModel model file (from tf.contrib.saved_model.save_keras_model) to TensorFlow.js Layers model format. Use tf.loadLayersModel() to load the model in JavaScript. |
tf_frozen_model | tfjs_graph_model | Convert a TensorFlow Frozen Graph (.pb) file to TensorFlow.js graph model format. Use tf.loadGraphModel() to load the converted model in JavaScript. |
tf_hub | tfjs_graph_model | Convert a TF-Hub model file to TensorFlow.js graph model format. Use tf.loadGraphModel() to load the converted model in JavaScript. |
tf_saved_model | tfjs_graph_model | Convert a TensorFlow SavedModel to TensorFlow.js graph model format. Use tf.loadGraphModel() to load the converted model in JavaScript. |
tf_session_bundle | tfjs_graph_model | Convert a TensorFlow Session Bundle to TensorFlow.js graph model format. Use tf.loadGraphModel() to load the converted model in JavaScript. |
--input_format | --output_format | Description |
---|---|---|
tfjs_layers_model | keras | Convert a TensorFlow.js Layers model (JSON + binary weight file(s)) to a Keras HDF5 model file. Use keras.models.load_model() or tf.keras.models.load_model() to load the converted model in Python. |
The conversion script above produces 2 types of files:

model.json (the dataflow graph and weight manifest file)
group1-shard*of* (collection of binary weight files)

For example, here is the MobileNet model converted and served at the following location:
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/model.json
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/group1-shard1of5
...
https://storage.cloud.google.com/tfjs-models/savedmodel/mobilenet_v1_1.0_224/group1-shard5of5
Instantiate the GraphModel class and run inference.
import * as tf from '@tensorflow/tfjs';
const MODEL_URL = 'https://.../mobilenet/model.json';
const model = await tf.loadGraphModel(MODEL_URL);
const cat = document.getElementById('cat');
model.execute({input: tf.browser.fromPixels(cat)});
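In practice an image classifier usually needs its input resized and normalized first; the 224x224 size, [0, 1] normalization, and 'input' node name below are assumptions for illustration, so substitute the values your converted model expects:
const input = tf.tidy(() =>
  tf.browser.fromPixels(cat)      // HWC uint8 tensor from the <img> element
    .resizeBilinear([224, 224])   // assumed input resolution
    .toFloat()
    .div(255)                     // assumed [0, 1] normalization
    .expandDims(0));              // add the batch dimension

const logits = model.execute({input: input});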
Check out our working MobileNet demo.
If your server requests credentials for accessing the model files, you can provide the optional RequestOption param.
const model = await tf.loadGraphModel(MODEL_URL, {credentials: 'include'});
Please see fetch() documentation for details.
TensorFlow.js can be used from Node.js. See the tfjs-node project for more details. Unlike web browsers, Node.js can access the local file system directly, so you can load the same converted model from the local file system into a Node.js program running TensorFlow.js. This is done by calling loadGraphModel with the path to the model files:
// Load the tfjs-node binding
import * as tf from '@tensorflow/tfjs-node';
const MODEL_PATH = 'file:///tmp/mobilenet/model.json';
const model = await tf.loadGraphModel(MODEL_PATH);
You can also load the remote model files the same way as in browser, but you might need to polyfill the fetch() method.
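For example, one way to polyfill it (a sketch assuming the node-fetch package; any polyfill that provides a global fetch works the same way):
import fetch from 'node-fetch';
import * as tf from '@tensorflow/tfjs-node';

// Make fetch() available globally so loadGraphModel can download the files.
global.fetch = fetch;
const model = await tf.loadGraphModel('https://example.org/model/model.json');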
Currently TensorFlow.js only supports a limited set of TensorFlow ops; see the full list. If your model uses an unsupported op, the tensorflowjs_converter script will fail and produce a list of the unsupported ops in your model. Please file issues to let us know what ops you need supported.
If you prefer to load the weights only, you can use the following code snippet.
import * as tf from '@tensorflow/tfjs';

const modelUrl = "https://example.org/model/model.json";
// Fetch the model topology JSON and pull out its weight manifest.
const response = await fetch(modelUrl);
const weightManifest = (await response.json())['weightsManifest'];
const weightMap = await tf.io.loadWeights(
    weightManifest, "https://example.org/model");
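The resolved weightMap is a plain object mapping weight names to tensors, so individual weights can be inspected directly (the 'conv1/kernel' key below is hypothetical; the real keys come from the manifest):
// Look up a single weight tensor by its manifest name.
const kernel = weightMap['conv1/kernel']; // hypothetical weight name
console.log(kernel.shape);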
FAQ

What TensorFlow models does the converter currently support?

Image-based models (e.g. MobileNet and SqueezeNet) are the best supported. Models with control flow ops (e.g. RNNs) are also supported. The tensorflowjs_converter script will validate the model you have and show a list of unsupported ops in your model. See this list for which ops are currently supported.
Will models with large weights work?

While the browser supports loading 100-500MB models, the page load time, the inference time, and the user experience would not be great. We recommend using models that are designed for edge devices (e.g. phones). These models are usually smaller than 30MB.
Will the browser cache the model files?

Yes, the weights are split into 4MB chunks, which enables the browser to cache them automatically. If the model architecture is less than 4MB (most models are), it will also be cached.
Does the converter support quantizing the weights?

Yes, you can use the --quantization_bytes option to compress int32/float32 weights to 1 or 2 bytes. Here is an example of 8-bit quantization:
$ tensorflowjs_converter \
--input_format=tf_hub \
--quantization_bytes=1 \
'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' \
/mobilenet/web_model
Why is the first prediction so much slower than subsequent calls?

The time of the first call includes the compilation time of the WebGL shader programs for the model. After the first call the shader programs are cached, which makes subsequent calls much faster. You can warm up the cache by calling the predict method with all-zero inputs right after the model finishes loading.
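A minimal warm-up sketch (the [1, 224, 224, 3] shape is an assumption; use your model's actual input shape):
// Run one throwaway prediction on zeros to compile and cache the shaders.
const warmupResult = model.predict(tf.zeros([1, 224, 224, 3]));
warmupResult.dataSync(); // wait for the GPU work to finish
warmupResult.dispose();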
To build TensorFlow.js converter from source, we need to clone the project and prepare the dev environment:
$ git clone https://github.com/tensorflow/tfjs-converter.git
$ cd tfjs-converter
$ yarn # Installs dependencies.
We recommend using Visual Studio Code for development. Make sure to install the TSLint VSCode extension and npm clang-format 1.2.2 or later with the Clang-Format VSCode extension for auto-formatting.
Before submitting a pull request, make sure the code passes all the tests and is clean of lint errors:
$ yarn test
$ yarn lint
To run a subset of tests and/or on a specific browser:
$ yarn test --browsers=Chrome --grep='execute'
> ...
> Chrome 64.0.3282 (Linux 0.0.0): Executed 39 of 39 SUCCESS (0.129 secs / 0 secs)
To run the tests once and exit the karma process (helpful on Windows):
$ yarn test --single-run
The npm package @tensorflow/tfjs-converter receives a total of 164,985 weekly downloads, so its popularity is classified as popular. We found that @tensorflow/tfjs-converter demonstrates a healthy version release cadence and project activity: the last version was released less than a year ago, and 10 open source maintainers collaborate on the project.