# @litertjs/tfjs-interop
Utility package for using `@litertjs/core` with TensorFlow.js.
This package provides helper functions for seamless interoperability between
the LiteRT.js and TensorFlow.js (TFJS) libraries. You can use it to run a
LiteRT model with TFJS tensors as inputs and receive TFJS tensors as outputs,
making it easy to integrate LiteRT.js into an existing TFJS pipeline.
## Prerequisites
This package has peer dependencies on `@litertjs/core`, `@tensorflow/tfjs`, and
`@tensorflow/tfjs-backend-webgpu`. You must have these installed in your
project:

```sh
npm install @litertjs/core @litertjs/tfjs-interop @tensorflow/tfjs @tensorflow/tfjs-backend-webgpu
```
## Usage
For a complete guide, see our Get Started section on
[ai.google.dev](https://ai.google.dev).
### Setup
Before you can run a model, you must initialize both TensorFlow.js and
LiteRT.js. To enable efficient GPU tensor conversion, you must also configure
LiteRT.js to use the same WebGPU device as the TFJS WebGPU backend.
```ts
import {loadLiteRt, liteRt} from '@litertjs/core';
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-webgpu';
import {WebGPUBackend} from '@tensorflow/tfjs-backend-webgpu';

// Initialize the TFJS WebGPU backend.
await tf.setBackend('webgpu');

// Load LiteRT.js's Wasm files, which ship with @litertjs/core and must be
// served by your app.
await loadLiteRt('/path/to/wasm/directory/');

// Point LiteRT.js at the same WebGPU device TFJS uses, so tensors can be
// passed between the two libraries without leaving the GPU.
const backend = tf.backend() as WebGPUBackend;
liteRt.setWebGpuDevice(backend.device);
```
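
Note that the WebGPU accelerator requires a browser with WebGPU support. A
minimal guard, assuming you want to fail fast before initializing rather than
fall back:

```ts
// Sketch of a support check; your app may prefer a CPU fallback to throwing.
if (!navigator.gpu) {
  throw new Error('WebGPU is not supported in this browser.');
}
```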
### Running a Model with TFJS Tensors
Once set up, you can use the `runWithTfjsTensors` function to wrap a LiteRT
`model.run` call. This function handles converting TFJS input tensors to
LiteRT tensors and converts the LiteRT output tensors back into TFJS tensors.
```ts
import {runWithTfjsTensors} from '@litertjs/tfjs-interop';
import {loadAndCompile} from '@litertjs/core';
import * as tf from '@tensorflow/tfjs';

// Load the model and compile it for WebGPU.
const model = await loadAndCompile(
  '/path/to/your/model/torchvision_mobilenet_v2.tflite',
  {accelerator: 'webgpu'},
);

// Inspect the model's expected inputs and outputs.
console.log(model.getInputDetails());
console.log(model.getOutputDetails());

// This MobileNetV2 model expects a [1, 3, 224, 224] input.
const input = tf.randomUniform([1, 3, 224, 224]);

// Pass a single tensor...
let results = runWithTfjsTensors(model, input);
await results[0].data();
results[0].print();
results[0].dispose();

// ...or an array of tensors (for models with multiple inputs)...
results = runWithTfjsTensors(model, [input]);
await results[0].data();
results[0].print();
results[0].dispose();

// ...or an object keyed by input name, which returns an object keyed by
// output name.
const resultsObject = runWithTfjsTensors(model, {
  'serving_default_args_0:0': input,
});
const result = resultsObject['StatefulPartitionedCall:0'];
await result.data();
result.print();
result.dispose();

// Models may have multiple signatures. You can run one by name...
console.log(model.signatures);
results = runWithTfjsTensors(model, 'serving_default', input);
await results[0].data();
results[0].print();
results[0].dispose();

// ...or call the signature object directly.
const signature = model.signatures['serving_default'];
results = runWithTfjsTensors(signature, input);
await results[0].data();
results[0].print();
results[0].dispose();
```
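
Because the inputs and outputs are ordinary TFJS tensors, the model slots
directly into a TFJS pipeline. Below is a minimal sketch that continues the
example above, assuming a hypothetical already-loaded `imageElement` and a
classification model that outputs logits (real torchvision preprocessing also
normalizes with the ImageNet mean and standard deviation, omitted here):

```ts
// Preprocess with TFJS ops: HWC image -> scaled NCHW batch of one.
const image = tf.browser.fromPixels(imageElement);          // [height, width, 3]
const resized = tf.image.resizeBilinear(image, [224, 224]); // [224, 224, 3]
const batched = resized.div(255).transpose([2, 0, 1]).expandDims(0); // [1, 3, 224, 224]

// Run the LiteRT model, then post-process with ordinary TFJS ops.
const [logits] = runWithTfjsTensors(model, batched);
const probabilities = tf.softmax(logits);
const prediction = tf.argMax(probabilities, -1);
console.log('Predicted class index:', (await prediction.data())[0]);

// TFJS tensors are not garbage collected; dispose them when done.
tf.dispose([image, resized, batched, logits, probabilities, prediction]);
```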