@litertjs/core

Core web runtime for LiteRT (previously TFLite), Google's open-source high-performance runtime for on-device AI.

This package provides the core functionality to load and run arbitrary LiteRT (.tflite) models directly in the browser, with support for WebGPU and CPU acceleration.

This is the primary package for LiteRT.js. For integration with TensorFlow.js, see @litertjs/tfjs-interop. To test and benchmark your models, check out the @litertjs/model-tester package.

Features

  • Load and run a LiteRT (.tflite) model.
  • Run with WebGPU acceleration on supported browsers.
  • Run with XNNPack-accelerated CPU kernels on any browser (see the sketch after this list).
  • Slot into existing TensorFlow.js pipelines as a replacement for TFJS Graph Models. See the @litertjs/tfjs-interop package for details.
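
For browsers without WebGPU support, the same API can target the CPU backend instead. The following is a minimal sketch, assuming 'wasm' is the accelerator name for the XNNPack-backed CPU path and that a freshly created tensor already lives on that backend; check both assumptions against the docs linked below.

import {loadLiteRt, loadAndCompile, Tensor} from '@litertjs/core';

await loadLiteRt('/path/to/wasm/directory/');

// Compile the model for the XNNPack-accelerated CPU (Wasm) backend.
const cpuModel = await loadAndCompile('/path/to/your/model.tflite', {
  accelerator: 'wasm',
});

// A tensor created from a TypedArray starts on the CPU ('wasm') backend,
// so no moveTo() call should be needed before running a CPU model.
const input = new Tensor(new Float32Array(1 * 3 * 224 * 224), [1, 3, 224, 224]);
const [output] = cpuModel.run(input);
console.log(output.toTypedArray());

// Tensors that are not moved to another backend must be freed manually.
input.delete();
output.delete();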

Usage

For a complete guide, see our docs at ai.google.dev/edge/litert/web.

The following code snippet loads LiteRT.js, loads a MobileNetV2 model, and runs it on a sample input tensor.

import {loadLiteRt, loadAndCompile, Tensor} from '@litertjs/core';

// Initialize LiteRT.js's Wasm files.
// These files are located in `node_modules/@litertjs/core/wasm/`
// and need to be served by your web server.
await loadLiteRt('/path/to/wasm/directory/');

const model = await loadAndCompile(
  '/path/to/your/model/torchvision_mobilenet_v2.tflite',
  {accelerator: 'webgpu'},
);

// Create an input tensor.
const inputTypedArray = new Float32Array(1 * 3 * 224 * 224);
const inputTensor = new Tensor(inputTypedArray, [1, 3, 224, 224]);

// Move the tensor to the GPU for a WebGPU-accelerated model.
const gpuTensor = await inputTensor.moveTo('webgpu');

// Run the model.
const results = model.run(gpuTensor);

// All tensors that are not moved to another backend with `moveTo` must
// eventually be freed with `.delete()`.
gpuTensor.delete();

// Move the result back to the CPU to read the data.
const result = await results[0].moveTo('wasm');
console.log(result.toTypedArray());

// Clean up the result tensor.
result.delete();
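
The sample input above is all zeros. In practice you would fill the Float32Array with real, preprocessed data before constructing the Tensor. The helper below is a hypothetical sketch (not part of this package) that converts an image element into an NCHW Float32Array, assuming a 224x224 input and the usual ImageNet mean/std normalization for torchvision MobileNetV2; adjust it to whatever your model actually expects.

// Hypothetical helper: draws an image to a canvas, reads its pixels, and
// packs them as normalized NCHW floats.
function imageToInputArray(image) {
  const width = 224;
  const height = 224;
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(image, 0, 0, width, height);
  const {data} = ctx.getImageData(0, 0, width, height);  // RGBA bytes

  const mean = [0.485, 0.456, 0.406];
  const std = [0.229, 0.224, 0.225];
  const out = new Float32Array(3 * width * height);
  for (let i = 0; i < width * height; i++) {
    for (let c = 0; c < 3; c++) {
      const value = data[i * 4 + c] / 255;  // byte -> [0, 1]
      // Channel-major (NCHW) ordering: all R values, then G, then B.
      out[c * width * height + i] = (value - mean[c]) / std[c];
    }
  }
  return out;
}

// Usage with the example above:
// const inputTensor = new Tensor(imageToInputArray(imgElement), [1, 3, 224, 224]);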
