hgd

Detection and classification of head gestures in videos

  • Version: 0.4.0 (PyPI)
  • License: MIT

Introduction

The Head Gesture Detection (HGD) library provides a pre-trained model and a simple inference API for detecting head gestures in short videos. Under the hood, it uses Google MediaPipe to collect the landmark features.

Installation

Tested with Python 3.8, 3.9, and 3.10.

The best way to install HGD with its dependencies is from PyPI:

python3 -m pip install --upgrade hgd

Alternatively, to obtain the latest version from this repository:

git clone git@github.com:bhky/head-gesture-detection.git
cd head-gesture-detection
python3 -m pip install .
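
To verify that the installation worked, importing the inference entry point used in the examples below should succeed:

from hgd.inference import predict_video  # no error means the package is installed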

Usage

An easy way to try this library and the pre-trained model is to record a short video of your head gesture.

Webcam

The code snippet below will perform the following:

  • Look for the pre-trained weights file at $HOME/.hgd/weights; if it does not exist, the file will be downloaded from this repository.
  • Start the webcam.
  • Collect the needed number of frames (default 60) for the model.
  • Stop the webcam automatically (or press q to stop earlier).
  • Predict your head gesture and print the result to STDOUT.

from hgd.inference import predict_video

result = predict_video()
print(result)

Video file

Alternatively, you could provide a pre-recorded video file:

from hgd.inference import predict_video

result = predict_video(
  "your_head_gesture_video.mp4",
  from_beginning=False,
  motion_threshold=0.5,  # Optionally tune the thresholds.
  gesture_threshold=0.9
)
# The `from_beginning` flag controls whether the needed frames will be obtained
# from the beginning or toward the end of the video.
# Thresholds can be adjusted as needed, see explanation below.

Result format

The result is returned as a Python dictionary, for example:

{
  'gesture': 'turning',
  'probabilities': {
    'has_motion': 1.0,
    'gestures': {
      'nodding': 0.009188028052449226,
      'turning': 0.9908120036125183
    }
  }
}
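
The fields can then be read directly from this dictionary. A minimal sketch, assuming the format above (note that probabilities can be empty when no landmarks are detected, as described below):

gesture = result["gesture"]
gesture_probs = result["probabilities"].get("gestures", {})
print(f"Detected gesture: {gesture}")
if gesture in gesture_probs:
    print(f"Probability: {gesture_probs[gesture]:.3f}")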

Head gestures

The following gesture types are available:

  • nodding - Repeatedly tilt your head upward and downward.
  • turning - Repeatedly turn your head leftward and rightward.
  • stationary - Not tilting or turning your head; purely translational head motion is still treated as stationary.
  • undefined - Unrecognised gesture or no landmarks detected (usually means no face is shown).

To determine the final gesture (a code sketch of this logic follows the list):

  • If has_motion probability is smaller than motion_threshold (default 0.5), gesture is stationary. Other probabilities are irrelevant.
  • Otherwise, we will look for the largest probability from gestures:
    • If it is smaller than gesture_threshold (default 0.9), gesture is undefined,
    • else, the corresponding gesture label is selected (e.g., nodding).
  • If no landmarks are detected in the video, gesture is undefined. The probabilities dictionary is empty.
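
Below is a minimal sketch of that decision logic, reconstructed from the rules above for illustration; the library applies these thresholds internally, so this is not its actual implementation:

def classify(probs, motion_threshold=0.5, gesture_threshold=0.9):
    # Empty probabilities dictionary: no landmarks were detected.
    if not probs:
        return "undefined"
    # Below the motion threshold, the head is considered stationary.
    if probs["has_motion"] < motion_threshold:
        return "stationary"
    # Otherwise take the most probable gesture, if it is confident enough.
    gestures = probs["gestures"]
    label = max(gestures, key=gestures.get)
    return label if gestures[label] >= gesture_threshold else "undefined"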
