@tensorflow-models/tasks
WORK IN PROGRESS
TFJS Task API provides a unified experience for running task-specific models on the Web. It is designed with ease of use in mind, aiming to improve usability for JS developers without ML knowledge. It has the following features:

- **Easy-to-discover models**: Models from different runtime systems (e.g., TFJS, TFLite, MediaPipe) are grouped by popular ML tasks, such as sentiment detection, image classification, pose detection, etc.
- **Clean and powerful APIs**: Different tasks come with API interfaces that are the most intuitive to use for that particular task. Models under the same task share the same API, making them easy to explore. Inference can be done within just 3 lines of code (see the sketch after this list).
- **Simple installation**: You only need to import this package (<20K in size) to start using the API, without needing to worry about other dependencies such as model packages, runtimes, and backends. They will be dynamically loaded on demand without duplication.
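For a feel of the end-to-end flow, here is a minimal sketch using the MobileNet image classification model covered below (the `img` element and logging are illustrative):

```js
import * as tfTask from '@tensorflow-models/tasks';

// Roughly the "3 lines": load a task model, run inference, read the result.
const model = await tfTask.ImageClassification.MobileNet.TFJS.load();
const result = await model.predict(document.querySelector('img'));
console.log(result.classes);
```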
The following table summarizes all the supported tasks and their models:
| Task | Model | Supported runtimes · Docs · Resources |
|---|---|---|
| **Image Classification**<br>Identify images into predefined classes. Demo | Mobilenet | |
| | Custom model | |
| **Object Detection**<br>Localize and identify multiple objects in a single image. Demo | COCO-SSD | |
| | Custom model | |
| **Image Segmentation**<br>Predict associated class for each pixel of an image. Demo | Deeplab | |
| | Custom model | |
| **Sentiment Detection**<br>Detect pre-defined sentiments in a given paragraph of text. Demo | Toxicity | |
| | Movie review | |
| **NL Classification**<br>Identify texts into predefined classes. Demo | Custom model | |
| **Question & Answer**<br>Answer questions based on the content of a given passage. Demo | BertQA | |
(The initial version only supports the web browser environment. NodeJS support is coming soon)
This package is all you need. The packages required by different models will be loaded on demand automatically.
Via NPM:

```js
// Import @tensorflow-models/tasks.
import * as tfTask from '@tensorflow-models/tasks';
```

Via a script tag:

```html
<!-- Import @tensorflow-models/tasks -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/tasks"></script>
```
The code snippet below shows how to load various models for the Image Classification task:
```js
import * as tfTask from '@tensorflow-models/tasks';

// Load the TFJS mobilenet model.
const model1 = await tfTask.ImageClassification.MobileNet.TFJS.load({
  backend: 'wasm',
});

// Load the TFLite mobilenet model.
const model2 = await tfTask.ImageClassification.MobileNet.TFLite.load();

// Load a custom image classification TFLite model.
const model3 = await tfTask.ImageClassification.CustomModel.TFLite.load({
  model: 'url/to/your/bird_classifier.tflite',
});
```
Since all these models are for the Image Classification task, they will have
the same task model type: ImageClassifier in
this case. Each task model's predict inference method has a unique and
easy-to-use API interface. For example, in ImageClassifier, the method takes an
image-like element and returns the predicted classes:
```js
const result = await model1.predict(document.querySelector('img')!);
console.log(result.classes);
```
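Because all three models loaded above implement the same ImageClassifier interface, the same call works regardless of which runtime backs the model. A minimal sketch (the `img` element and logging are illustrative):

```js
// The same predict() call works for the TFJS, TFLite, and custom models
// loaded above, since they all share the ImageClassifier interface.
const img = document.querySelector('img');
for (const model of [model1, model2, model3]) {
  const {classes} = await model.predict(img);
  console.log(classes);
}
```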
TFLite is supported by the @tensorflow/tfjs-tflite package that
is built on top of the TFLite Task Library and
WebAssembly. As a result, all TFLite custom models should comply with the
metadata requirements of the corresponding task in the TFLite task library.
Check out the "model compatibility requirements" section of the official task
library page. For example, the requirements of ImageClassifier can be found
here.
See an example of how to use a TFLite custom model in the Load model and run inference section above.
For TFJS models, the choice of backend affects performance the most. In most cases, the WebGL backend (the default) is the fastest.
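As a sketch, the backend is selected via the same `backend` load option shown earlier; the exact set of supported values beyond `'wasm'` is an assumption here:

```js
// Explicitly pick a backend when loading a TFJS task model. 'webgl' is
// the default; 'wasm' can be used where WebGL is unavailable.
const webglModel = await tfTask.ImageClassification.MobileNet.TFJS.load({
  backend: 'webgl',
});
```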
For TFLite models, we use WebAssembly under the hood. It uses XNNPACK
to accelerate model inference. To achieve the best performance, use a browser
that supports "WebAssembly SIMD" and "WebAssembly threads". In Chrome, these can
be enabled in chrome://flags/. The task API will automatically choose the best
WASM module to load and set the number of threads for best performance based on
the current browser environment.
As of March 2021, XNNPACK works best for non-quantized TFLite models. Quantized models can still be used, but XNNPACK only supports ADD, CONV_2D, DEPTHWISE_CONV_2D, and FULLY_CONNECTED ops for models with quantization-aware training using TF MOT.
To build and test the package locally:

```sh
$ yarn
$ yarn build
$ yarn test
$ yarn build-npm
# (TODO): publish
```