@tensorflow-models/blazeface

Pretrained face detection model in TensorFlow.js

Version 0.0.2 (published 02 Jan 2020) · 6.8K weekly downloads · 6 maintainers

Blazeface detector

Blazeface is a lightweight model that detects faces in images. Blazeface makes use of the Single Shot Detector architecture with a custom encoder. The model may serve as a first step for face-related computer vision applications, such as facial keypoint recognition.

[demo animation]

More background information about the model, as well as its performance characteristics on different datasets, can be found here: https://drive.google.com/file/d/1f39lSzU5Oq-j_OXgS67KfN5wNsoeAZ4V/view

The model is designed for front-facing cameras on mobile devices, where faces in view tend to occupy a relatively large fraction of the canvas. Blazeface may struggle to identify far-away faces.

Check out our demo, which uses the model to predict facial bounding boxes from a live video stream.

Installation

Using yarn:

$ yarn add @tensorflow-models/blazeface

Using npm:

$ npm install @tensorflow-models/blazeface

Note that this package specifies @tensorflow/tfjs-core and @tensorflow/tfjs-converter as peer dependencies, so they will also need to be installed.
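For example, with npm the peer dependencies can be installed in the same command as the model itself (a minimal sketch; pin versions as appropriate for your project):

$ npm install @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow-models/blazeface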

Usage

To import the model in an npm project:

import * as blazeface from '@tensorflow-models/blazeface';

or as a standalone script tag:

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/blazeface"></script>

Then:


async function main() {
  // Load the model.
  const model = await blazeface.load();

  // Pass in an image or video to the model. The model returns an array of
  // bounding boxes, probabilities, and landmarks, one for each detected face.

  const returnTensors = false; // Pass in `true` to get tensors back, rather than values.
  const predictions = await model.estimateFaces(document.querySelector("img"), returnTensors);

  if (predictions.length > 0) {
    /*
    `predictions` is an array of objects describing each detected face, for example:

    [
      {
        topLeft: [232.28, 145.26],
        bottomRight: [449.75, 308.36],
        probability: [0.998],
        landmarks: [
          [295.13, 177.64], // right eye
          [382.32, 175.56], // left eye
          [341.18, 205.03], // nose
          [345.12, 250.61], // mouth
          [252.76, 211.37], // right ear
          [431.20, 204.93] // left ear
        ]
      }
    ]
    */

    // Obtain a 2D drawing context. This assumes a <canvas> element exists on
    // the page, sized and positioned to match the input image.
    const ctx = document.querySelector("canvas").getContext("2d");

    for (let i = 0; i < predictions.length; i++) {
      const start = predictions[i].topLeft;
      const end = predictions[i].bottomRight;
      const size = [end[0] - start[0], end[1] - start[1]];

      // Render a rectangle over each detected face.
      ctx.fillRect(start[0], start[1], size[0], size[1]);
    }
  }
}

main();
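The same estimateFaces call accepts a video element, which is how the live demo works. The sketch below is an illustrative example rather than code from the package: it assumes a playing <video> element with id "webcam", a matching <canvas> overlay with id "overlay", and runs detection once per animation frame.

import * as blazeface from '@tensorflow-models/blazeface';

async function runVideoDemo() {
  const model = await blazeface.load();

  // Assumed page elements: a webcam <video> and a <canvas> overlay of the same size.
  const video = document.getElementById('webcam');
  const canvas = document.getElementById('overlay');
  const ctx = canvas.getContext('2d');

  async function renderFrame() {
    // Pass the video element directly; `false` returns plain values, not tensors.
    const predictions = await model.estimateFaces(video, false);

    // Clear the previous frame's boxes, then outline each detected face.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (const { topLeft, bottomRight } of predictions) {
      ctx.strokeRect(
        topLeft[0],
        topLeft[1],
        bottomRight[0] - topLeft[0],
        bottomRight[1] - topLeft[1]
      );
    }

    requestAnimationFrame(renderFrame);
  }

  renderFrame();
}

runVideoDemo();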
