# detection-lib - npm Package Compare versions

Comparing version 1.0.16 to 1.0.19

package.json

```diff
 {
   "name": "detection-lib",
-  "version": "1.0.16",
+  "version": "1.0.19",
   "main": "src/DetectorFactory.js",
@@ -9,3 +9,3 @@
   },
-  "keywords": [],
+  "keywords": ["face", "detection", "qr", "barcode", "modular"],
   "author": "Modular detection library for face, QR, etc.",
@@ -12,0 +12,0 @@ "license": "ISC",
@@ -18,3 +18,30 @@
```

---
## Description
detection-lib is a modular JavaScript face detection library built on top of MediaPipe Face Detection. It provides an easy-to-use interface for detecting faces in various input sources such as video, image, or canvas elements, with built-in initialization, error handling, and bounding-box extraction.
## How it Works
**Currently, only the face detection part is implemented.**
Internally, the library uses @mediapipe/face_detection to detect faces. The workflow is:
1. Load the detector (loads the MediaPipe model).
```js
import { createDetector } from 'detection-lib';
const detector = await createDetector({ type: 'face' });
```
2. Initialize it (this sends a dummy static image through the model so that all required files are loaded).
```js
await detector.initialize();
```
3. Run detection on any HTML media element (the input can be an HTMLVideoElement, HTMLImageElement, or HTMLCanvasElement).
```js
const result = await detector.detect(input);
```
4. Receive the result as an object with:
- status: Numeric status code
- message: Human-readable string
- boxes: Array of bounding boxes with x, y, w, h, and score
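
As an illustration of consuming the result shape described in step 4, here is a small standalone helper (not part of detection-lib; the name `bestBox` is hypothetical) that picks the highest-confidence bounding box:

```js
// Hypothetical helper (not part of detection-lib): pick the
// highest-scoring bounding box from a detection result.
// Boxes without a score are treated as score 0.
function bestBox(result) {
  if (!result || !Array.isArray(result.boxes) || result.boxes.length === 0) {
    return null;
  }
  return result.boxes.reduce((best, box) =>
    (box.score ?? 0) > (best.score ?? 0) ? box : best
  );
}

// Example with a result shaped like the one documented above:
const sample = {
  status: 200,
  message: 'OK',
  boxes: [
    { x: 10, y: 20, w: 50, h: 50, score: 0.7 },
    { x: 80, y: 15, w: 40, h: 45, score: 0.92 },
  ],
};
console.log(bestBox(sample)); // → the box with score 0.92
```

A caller could pass the object returned by `detector.detect(input)` straight into such a helper.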
## Installation


```js
import { useEffect, useRef, useState } from 'react';
import { createDetector } from 'detection-lib';

// Create references and state
const detectorRef = useRef(null);
const [detectorCreated, setDetectorCreated] = useState(false);
const [detectorReady, setDetectorReady] = useState(false);

// Step 1: Create the detector when the component mounts
useEffect(() => {
  const create = async () => {
    const detector = await createDetector({ type: 'face' });
    detectorRef.current = detector;
    setDetectorCreated(true);
  };
  create();

  // Cleanup on unmount
  return () => {
    setDetectorCreated(false);
    detectorRef.current = null;
  };
}, []);

// Step 2: Initialize the detector once created
useEffect(() => {
  const initialize = async () => {
    if (detectorCreated && detectorRef.current) {
      await detectorRef.current.initialize();
      setDetectorReady(true);
    }
  };
  initialize();

  // Cleanup
  return () => setDetectorReady(false);
}, [detectorCreated]);

// Step 3: Run detection (call this function after the detector is ready)
const runDetection = async (input) => {
  if (detectorReady && detectorRef.current) {
    const result = await detectorRef.current.detect(input);
    if (result.type === 'face' && result.boxes) {
      result.boxes.forEach((box) => {
        // Example: draw the box or process coordinates
        console.log('Detected Face Box:', box);
      });
    }
  }
};
```
## Output Format
```
{
status: 200, // or other status code
message: 'OK', // or descriptive error
boxes: [
{
x: Number, // top-left x
y: Number, // top-left y
w: Number, // width
h: Number, // height
score: Number // confidence score (optional)
},
...
]
}
```
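A caller might branch on the status field before using the boxes. The sketch below is illustrative only; the 500 status and its message are assumptions, since only 200/'OK' is documented above:

```js
// Hypothetical consumer (not part of detection-lib): extract boxes
// from a result, returning an empty array on any non-200 status.
function boxesOrEmpty(result) {
  if (!result || result.status !== 200) return [];
  return result.boxes ?? [];
}

const ok = { status: 200, message: 'OK', boxes: [{ x: 0, y: 0, w: 10, h: 10 }] };
// Assumed failure shape for illustration; actual codes are not documented here.
const failed = { status: 500, message: 'Detection failed', boxes: [] };

console.log(boxesOrEmpty(ok).length);     // 1
console.log(boxesOrEmpty(failed).length); // 0
```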
## API
### createDetector(options)
- **options.type: 'face' | 'qr' | string** — The type of detector to create.
- Returns a detector instance with async initialize() and detect(input) methods.
### Detector Interface

All detectors implement:

### DetectionResult
- **type**: string — The detector type ('face', 'qr', etc.)
- **boxes**: Array of { x, y, w, h, score? } (for face, etc.)
- **data**: Any extra data (for QR, etc.)
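
Inferred from the fields above, the result shape could be written as JSDoc typedefs with a small runtime guard (a sketch based on this section, not shipped by the library):

```js
/**
 * @typedef {Object} Box
 * @property {number} x       Top-left x
 * @property {number} y       Top-left y
 * @property {number} w       Width
 * @property {number} h       Height
 * @property {number} [score] Optional confidence score
 */

/**
 * @typedef {Object} DetectionResult
 * @property {string} type   Detector type ('face', 'qr', etc.)
 * @property {Box[]} [boxes] Bounding boxes (for face, etc.)
 * @property {*} [data]      Extra data (for QR, etc.)
 */

// Runtime guard matching the typedef above:
function isDetectionResult(value) {
  return !!value && typeof value.type === 'string' &&
    (value.boxes === undefined || Array.isArray(value.boxes));
}
```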
### Internals
- Uses @mediapipe/face_detection under the hood
- Loads model assets from jsDelivr CDN
- Initialization uses a built-in static image to "warm up" the model
- Implements result caching to optimize repeated calls
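
The library's actual cache strategy is not documented here, but caching results per input could resemble this minimal WeakMap-based sketch (illustrative only; `withResultCache` is not a library export):

```js
// Illustrative sketch only: cache detection results per input element
// using a WeakMap, so repeated calls with the same element are cheap
// and cache entries are garbage-collected along with their inputs.
function withResultCache(detectFn) {
  const cache = new WeakMap();
  return async (input) => {
    if (cache.has(input)) return cache.get(input);
    const result = await detectFn(input);
    cache.set(input, result);
    return result;
  };
}
```

Note that caching by element is only safe for static inputs such as images; a live video element changes every frame, so a real cache would need some invalidation key.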
### Extending

To add a new detector:
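
The concrete extension steps are elided in this diff, but judging from the documented interface (async initialize() and detect(input) returning a DetectionResult), a new detector might look roughly like this hypothetical sketch:

```js
// Hypothetical example (not part of detection-lib): a detector that
// follows the documented interface and result shape. The 'color' type
// and 500 status are assumptions for illustration.
class ColorDetector {
  async initialize() {
    // A real detector would load models / warm up here.
    this.ready = true;
  }

  async detect(input) {
    if (!this.ready) {
      return { status: 500, message: 'Detector not initialized', boxes: [] };
    }
    // A real detector would inspect `input`; this stub returns a fixed box.
    return {
      type: 'color',
      status: 200,
      message: 'OK',
      boxes: [{ x: 0, y: 0, w: 1, h: 1, score: 1 }],
    };
  }
}
```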