Roboflow Inference JS

An edge library for deploying computer vision applications built with Roboflow to JS environments.

Installation

This library is designed to be used within the browser, with a bundler such as Vite, webpack, or Parcel. Assuming your bundler is set up, you can install it by running:

npm install inferencejs

Getting Started

Begin by initializing the InferenceEngine. This starts a background worker that can download and execute models without blocking the user interface.

    import { InferenceEngine } from "inferencejs";

    const PUBLISHABLE_KEY = "rf_a6cd..."; // replace with your own publishable key from Roboflow

    const inferEngine = new InferenceEngine();
    const workerId = await inferEngine.startWorker("gaze", 1, PUBLISHABLE_KEY);

    // make inferences against the model
    const result = await inferEngine.infer(workerId, img);

API

InferenceEngine

new InferenceEngine()

Creates a new InferenceEngine instance.

startWorker(modelName: string, modelVersion: number, publishableKey: string): Promise<number>

Starts a new worker for the given model and returns the workerId. Important: publishableKey is required and can be obtained from your project settings on Roboflow.

infer(workerId: number, img: ImageBitmap): Promise<GazeDetections>

Runs inference on an image using the worker with the given workerId. The img argument can be created with createImageBitmap.
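For context, here is a browser-only sketch of producing an ImageBitmap from a video element and passing it to the engine. The "webcam" element id is illustrative, and the inferEngine and workerId bindings are assumed to come from the Getting Started snippet above.

```typescript
// Browser-only sketch: grab the current frame of a <video> element,
// wrap it in an ImageBitmap, and run inference on it.
const video = document.getElementById("webcam") as HTMLVideoElement;
const img = await createImageBitmap(video);
const detections = await inferEngine.infer(workerId, img);
img.close(); // release the bitmap's backing memory once inference is done
```

Calling close() after each frame matters when inferring in a loop, since ImageBitmaps hold GPU/CPU-backed pixel data.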

stopWorker(workerId: number): Promise<void>

Stops the worker with the given workerId.

GazeDetections

The result of making an inference using the InferenceEngine on a Gaze model. An array with the following type:

type GazeDetections = {
    leftEye: { x: number, y: number },
    rightEye: { x: number, y: number },
    yaw: number,
    pitch: number
}[]
leftEye.x

The x position of the left eye as a floating point number between 0 and 1, measured in percentage of the input image width.

leftEye.y

The y position of the left eye as a floating point number between 0 and 1, measured in percentage of the input image height.

rightEye.x

The x position of the right eye as a floating point number between 0 and 1, measured in percentage of the input image width.

rightEye.y

The y position of the right eye as a floating point number between 0 and 1, measured in percentage of the input image height.

yaw

The yaw of the visual gaze, measured in radians.

pitch

The pitch of the visual gaze, measured in radians.
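Since eye positions are normalized to the input image and angles are reported in radians, a small helper can convert a detection into pixel coordinates and degrees. This is a sketch: the GazeDetection type is restated locally from the definition above, and the helper names are illustrative, not part of the library.

```typescript
// Local restatement of a single GazeDetections entry (see type above).
type GazeDetection = {
  leftEye: { x: number; y: number };
  rightEye: { x: number; y: number };
  yaw: number;
  pitch: number;
};

const toDegrees = (radians: number): number => (radians * 180) / Math.PI;

// Scale normalized eye positions by the input image dimensions,
// and convert gaze angles from radians to degrees.
function toPixelSpace(d: GazeDetection, width: number, height: number) {
  return {
    leftEye: { x: d.leftEye.x * width, y: d.leftEye.y * height },
    rightEye: { x: d.rightEye.x * width, y: d.rightEye.y * height },
    yawDeg: toDegrees(d.yaw),
    pitchDeg: toDegrees(d.pitch),
  };
}
```

For example, a detection with leftEye.x of 0.25 on a 640-pixel-wide image maps to a pixel x of 160.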

Example

A fully functional example can be found in demo/demo.ts.

Developing

To start the local development server, run npm run dev; this serves the demo (index.html) in development mode via Vite, with hot reloading. To run the library tests, run npm run test.

Last updated on 04 Apr 2024