RetinaFaceX (X-extended): Lightweight Face Detection Library


RetinaFaceX is a lightweight face detection library designed for high-performance face localization and landmark detection. The library supports ONNX models and provides utilities for bounding box visualization and landmark plotting. To train a RetinaFace model, see https://github.com/yakhyo/retinaface-pytorch.


Features

  • High-speed face detection using ONNX models.
  • Accurate facial landmark localization (e.g., eyes, nose, and mouth).
  • Easy-to-use API for inference and visualization.
  • Customizable confidence thresholds for bounding box filtering.

Installation

Using pip

pip install retinafacex

Local installation using pip

Clone the repository

git clone https://github.com/yakhyo/retinafacex.git
cd retinafacex

Install using pip

pip install .
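
After installation, a quick import check can confirm the package is available (a minimal sanity check, using the same import path as the Quick Start below):

python -c "from retinafacex import RetinaFace; print(RetinaFace)"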

Quick Start

Initialize the Model

from retinafacex import RetinaFace

# Initialize the RetinaFace model
retinaface_inference = RetinaFace(
    model="retinaface_mnet_v2",  # Model name
    conf_thresh=0.5,            # Confidence threshold
    pre_nms_topk=5000,          # Pre-NMS Top-K detections
    nms_thresh=0.4,             # NMS IoU threshold
    post_nms_topk=750           # Post-NMS Top-K detections
)

Run Inference

import cv2
from retinafacex.visualization import draw_detections

# Load an image
image_path = "assets/test.jpg"
original_image = cv2.imread(image_path)

# Perform inference
boxes, landmarks = retinaface_inference.detect(original_image)

# Visualize results
draw_detections(original_image, (boxes, landmarks), vis_threshold=0.6)

# Save the output image
output_path = "output.jpg"
cv2.imwrite(output_path, original_image)
print(f"Saved output image to {output_path}")

Evaluation results of available models on WiderFace

RetinaFace ONNX Backbones    Easy      Medium    Hard
retinaface_mnetv1_025        88.48%    87.02%    80.61%
retinaface_mnetv1_050        89.42%    87.97%    82.40%
retinaface_mnetv1            90.59%    89.14%    84.13%
retinaface_mnetv2            91.70%    91.03%    86.60%
retinaface_r18               92.50%    91.02%    86.63%
retinaface_r34               94.16%    93.12%    88.90%
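
The heavier backbones trade speed for accuracy. Assuming the constructor accepts the backbone names listed in the API reference below, switching backbones is a one-line change (sketch):

# Higher-accuracy ResNet-34 backbone; other parameters keep their defaults
retinaface_r34 = RetinaFace(model="retinaface_r34", conf_thresh=0.5)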

API Reference

RetinaFace Class

Initialization
RetinaFace(
    model: str,
    conf_thresh: float = 0.5,
    pre_nms_topk: int = 5000,
    nms_thresh: float = 0.4,
    post_nms_topk: int = 750
)
  • model: Model name (e.g., retinaface_mnet_v2).
    • retinaface_mnet025
    • retinaface_mnet050
    • retinaface_mnet_v1
    • retinaface_mnet_v2
    • retinaface_r18
    • retinaface_r34
  • conf_thresh: Minimum confidence threshold for detections.
  • pre_nms_topk: Maximum number of detections to keep before NMS.
  • nms_thresh: IoU threshold for Non-Maximum Suppression.
  • post_nms_topk: Maximum number of detections to keep after NMS.
detect(
    image: np.ndarray,
    max_num: Optional[int] = 0,
    metric: Literal["default", "max"] = "default",
    center_weight: Optional[float] = 2.0
) -> Tuple[np.ndarray, np.ndarray]
  • Description: Performs face detection on the input image and returns bounding boxes and landmarks for detected faces.

  • Inputs:

    • image (np.ndarray): The input image as a NumPy array in BGR format.
    • max_num (Optional[int], default=0): The maximum number of faces to return. If 0, all detected faces are returned.
    • metric (Literal["default", "max"], default="default"): The metric for prioritizing detections:
      • "default": Prioritize detections closer to the image center.
      • "max": Prioritize detections with larger bounding box areas.
    • center_weight (Optional[float], default=2.0): A weight factor for prioritizing faces closer to the center of the image.
  • Outputs:

    • Tuple[np.ndarray, np.ndarray]: A tuple containing:
      • bounding_boxes (np.ndarray): An array of bounding boxes, each represented as [x_min, y_min, x_max, y_max, confidence].
      • landmarks (np.ndarray): An array of facial landmarks, each represented as [(x1, y1), ..., (x5, y5)].
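
As a short illustration of the optional arguments above, the sketch below keeps only the single most central face (hypothetical usage built from the documented defaults):

# Return at most one face, preferring detections near the image center
boxes, landmarks = retinaface_inference.detect(
    original_image,
    max_num=1,             # keep at most one detection
    metric="default",      # "default" favors faces near the image center
    center_weight=2.0      # weight applied to center proximity
)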

Visualization Utilities

draw_detections(original_image, detections, vis_threshold)

  • Draws bounding boxes and landmarks on the image.
  • Filters detections below the confidence threshold.

Contributing

We welcome contributions to enhance the library! Feel free to:

  • Submit bug reports or feature requests.
  • Fork the repository and create a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Acknowledgments

