NativeScript Yoonit Camera

[Badges: NativeScript version · npm downloads · Android · iOS · MIT license]

A NativeScript plugin to provide:

  • Modern Android camera API (CameraX)
  • Camera preview (Front & Back)
  • PyTorch integration (Android)
  • Computer vision pipeline
  • Face detection, capture and image crop
  • Understanding of the human face
  • Frame capture
  • Capture timed images
  • QR Code scanning


Installation

npm i -S @yoonit/nativescript-camera

Usage

All functionality that @yoonit/nativescript-camera provides is accessed through the YoonitCamera component, which includes the camera preview. Below is the basic usage code; for more details, see the Methods, Events or the Demo Vue.

VueJS Plugin

main.js

import Vue from 'nativescript-vue'  
import YoonitCamera from '@yoonit/nativescript-camera/vue'  
  
Vue.use(YoonitCamera)  

After that, you can access the camera object anywhere in your project using this.$yoo.camera

Vue Component

App.vue

<template>
  <Page @loaded="onLoaded">
    <YoonitCamera
      ref="yooCamera"
      lens="front"
      captureType="face"
      :imageCapture="true"
      :imageCaptureAmount="10"
      :imageCaptureInterval="500"
      :detectionBox="true"
      @faceDetected="doFaceDetected"
      @imageCaptured="doImageCaptured"
      @endCapture="doEndCapture"
      @qrCodeContent="doQRCodeContent"
      @status="doStatus"
      @permissionDenied="doPermissionDenied"
    />
  </Page>
</template>

<script>
  export default {
    data: () => ({
      imagePath: null,
      imageCreated: null
    }),

    methods: {
      async onLoaded() {

        console.log('[YooCamera] Getting Camera view')
        this.$yoo.camera.registerElement(this.$refs.yooCamera)

        console.log('[YooCamera] Getting permission')
        if (await this.$yoo.camera.requestPermission()) {
          
          console.log('[YooCamera] Permission granted, start preview')
          this.$yoo.camera.preview()
        }
      },

      doFaceDetected({ 
        x, 
        y, 
        width, 
        height,
        leftEyeOpenProbability,
        rightEyeOpenProbability,
        smilingProbability,
        headEulerAngleX,
        headEulerAngleY,
        headEulerAngleZ
      }) {
        console.log(
          '[YooCamera] doFaceDetected',
          `
          x: ${x}
          y: ${y}
          width: ${width}
          height: ${height}
          leftEyeOpenProbability: ${leftEyeOpenProbability}
          rightEyeOpenProbability: ${rightEyeOpenProbability}
          smilingProbability: ${smilingProbability}
          headEulerAngleX: ${headEulerAngleX}
          headEulerAngleY: ${headEulerAngleY}
          headEulerAngleZ: ${headEulerAngleZ}
          `
        )
        if (!x || !y || !width || !height) {
          this.imagePath = null
        }
      },

      doImageCaptured({
        type,
        count,
        total,
        image: {
          path,
          source
        },
        inferences
      }) {
        if (total === 0) {
          console.log('[YooCamera] doImageCaptured', `${type}: [${count}] ${path}`)
          this.imageCreated = `${count}`
        } else {
          console.log('[YooCamera] doImageCaptured', `${type}: [${count}] of [${total}] - ${path}`)
          this.imageCreated = `${count} of ${total}`
        }
        console.log('[YooCamera] PyTorch inferences', inferences)
        this.imagePath = source
      },

      doEndCapture() {
        console.log('[YooCamera] doEndCapture')
      },

      doQRCodeContent({ content }) {
        console.log('[YooCamera] doQRCodeContent', content)
      },

      doStatus({ status }) {
        console.log('[YooCamera] doStatus', status)
      },

      doPermissionDenied() {
        console.log('[YooCamera] doPermissionDenied')
      }
    }
  }
</script>

API

Props

| Props | Input/Format | Default value | Description |
| ----- | ------------ | ------------- | ----------- |
| lens | "front" or "back" | "front" | The camera lens to use, "front" or "back". |
| captureType | "none", "face", "frame" or "qrcode" | "none" | The capture type of the camera. |
| imageCapture | boolean | false | Enable/disable saving the captured images. |
| imageCaptureAmount | number | 0 | The image capture amount goal. |
| imageCaptureInterval | number | 1000 | The image capture time interval in milliseconds. |
| imageCaptureWidth | "NNpx" | "200px" | The image capture width in pixels. |
| imageCaptureHeight | "NNpx" | "200px" | The image capture height in pixels. |
| colorEncoding | "RGB" or "YUV" | "RGB" | Android only. The image capture color encoding type: "RGB" or "YUV". |
| detectionBox | boolean | false | Show/hide the face detection box. |
| detectionBoxColor | string | "#ffffff" | Set the detection box color. |
| detectionMinSize | "NN%" | "0%" | The minimum face size to capture, as a percentage of the screen width. |
| detectionMaxSize | "NN%" | "100%" | The maximum face size to capture, as a percentage of the screen width. |
| roi | boolean | false | Enable/disable the region-of-interest capture. |
| roiTopOffset | "NN%" | "0%" | Distance, in percent, between the top of the face bounding box and the top of the camera preview. |
| roiRightOffset | "NN%" | "0%" | Distance, in percent, between the right of the face bounding box and the right of the camera preview. |
| roiBottomOffset | "NN%" | "0%" | Distance, in percent, between the bottom of the face bounding box and the bottom of the camera preview. |
| roiLeftOffset | "NN%" | "0%" | Distance, in percent, between the left of the face bounding box and the left of the camera preview. |
| roiAreaOffset | boolean | false | Enable/disable display of the region-of-interest area offset. |
| roiAreaOffsetColor | string | "#ffffff73" | Set the region-of-interest area offset color. |
| faceContours (Android only) | boolean | false | Enable/disable display of the contour points on a detected face. |
| faceContoursColor (Android only) | string | "#FFFFFF" | Set the face contour points color. |
| computerVision (Android only) | boolean | false | Enable/disable the computer vision model. |
| torch | boolean | false | Enable/disable the device torch. Available only with the "back" camera lens. |
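
For reference, here is a minimal sketch of a QR code scanner configured only through the props above. Permission handling and preview start are omitted for brevity (see the full App.vue example); doQRCodeContent is just an arbitrary handler name, not a plugin requirement.

<YoonitCamera
  lens="back"
  captureType="qrcode"
  @qrCodeContent="doQRCodeContent"
/>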

Methods

| Function | Parameters | Valid values | Return type | Description |
| -------- | ---------- | ------------ | ----------- | ----------- |
| requestPermission | - | - | promise | Ask the user for permission to access the camera. |
| hasPermission | - | - | boolean | Return whether the application has camera permission. |
| preview | - | - | void | Start the camera preview if permission has been granted. |
| startCapture | type: string | "none", "face", "qrcode" or "frame" | void | Set the capture type to "none", "face", "qrcode" or "frame". Default value is "none". |
| stopCapture | - | - | void | Stop any type of capture. |
| destroy | - | - | void | Destroy the preview. |
| toggleLens | - | - | void | Toggle the camera lens facing "front"/"back". |
| setCameraLens | lens: string | "front" or "back" | void | Set the camera to use the "front" or "back" lens. Default value is "front". |
| getLens | - | - | string | Return "front" or "back". |
| setImageCapture | enable: boolean | true or false | void | Enable/disable saving the captured images. Default value is false. |
| setImageCaptureAmount | amount: Int | Any positive Int value | void | For the value 0, save images indefinitely. When the capture image amount is reached, the event endCapture is emitted. Default value is 0. |
| setImageCaptureInterval | interval: number | Any positive number, representing time in milliseconds | void | Set the image capture time interval in milliseconds. |
| setImageCaptureWidth | width: string | Value format must be NNpx | void | Set the image capture width in pixels. |
| setImageCaptureHeight | height: string | Value format must be NNpx | void | Set the image capture height in pixels. |
| setImageCaptureColorEncoding | colorEncoding: string | "YUV" or "RGB" | void | Android only. Set the image capture color encoding type: "RGB" or "YUV". |
| setDetectionBox | enable: boolean | true or false | void | Show/hide the face detection box. |
| setDetectionBoxColor | color: string | Hexadecimal color | void | Set the detection box color. |
| setFacePaddingPercent | percentage: string | Value format must be NN% | void | Set the face image capture and detection box padding, in percent. |
| setDetectionMinSize | percentage: string | Value format must be NN% | void | Set the minimum face size to capture, as a percentage of the screen width. |
| setDetectionMaxSize | percentage: string | Value format must be NN% | void | Set the maximum face size to capture, as a percentage of the screen width. |
| setROI | enable: boolean | true or false | void | Enable/disable the face region-of-interest capture. |
| setROITopOffset | percentage: string | Value format must be NN% | void | Distance, in percent, between the top of the face bounding box and the top of the camera preview. |
| setROIRightOffset | percentage: string | Value format must be NN% | void | Distance, in percent, between the right of the face bounding box and the right of the camera preview. |
| setROIBottomOffset | percentage: string | Value format must be NN% | void | Distance, in percent, between the bottom of the face bounding box and the bottom of the camera preview. |
| setROILeftOffset | percentage: string | Value format must be NN% | void | Distance, in percent, between the left of the face bounding box and the left of the camera preview. |
| setROIMinSize | percentage: string | Value format must be NN% | void | Set the minimum face size within the ROI. |
| setROIAreaOffset | enable: boolean | true or false | void | Enable/disable display of the region-of-interest area offset. |
| setROIAreaOffsetColor | color: string | Hexadecimal color | void | Set the region-of-interest area offset color. |
| setFaceContours (Android only) | enable: boolean | true or false | void | Enable/disable display of the contour points on a detected face. |
| setFaceContoursColor (Android only) | color: string | Hexadecimal color | void | Set the face contour points color. |
| setComputerVision (Android only) | enable: boolean | true or false | void | Enable/disable the computer vision model. |
| setComputerVisionLoadModels (Android only) | modelPaths: Array<string> | Valid file system paths to PyTorch computer vision models | void | Set the models to be used when an image is captured. |
| computerVisionClearModels (Android only) | - | - | void | Clear the models previously added via setComputerVisionLoadModels. |
| setTorch | enable: boolean | true or false | void | Enable/disable the device torch. Available only with the "back" camera lens. |
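
The same configuration can also be applied imperatively at runtime. A minimal sketch, assuming the component was registered as this.$yoo.camera as in the App.vue example above; the method name startTimedFaceCapture and the chosen values are our own, not plugin defaults:

async startTimedFaceCapture() {
  // Make sure the app can use the camera before configuring it
  if (!this.$yoo.camera.hasPermission()) {
    if (!(await this.$yoo.camera.requestPermission())) return
  }
  this.$yoo.camera.preview()

  this.$yoo.camera.setImageCapture(true)         // save the captured images
  this.$yoo.camera.setImageCaptureAmount(5)      // emit endCapture after 5 images
  this.$yoo.camera.setImageCaptureInterval(1000) // one capture per second
  this.$yoo.camera.startCapture('face')          // begin face capture
}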

Events

| Event | Parameters | Description |
| ----- | ---------- | ----------- |
| imageCaptured | { type: string, count: number, total: number, image: { path: string, source: any, binary: any }, inferences: [{ ['model name']: model output }] } | A capture type of "face" or "frame" must be started. Emitted when a face/frame image file is saved. type: "face" or "frame"; count: the current image count; total: the total to create; image.path: the face/frame image path; image.source: the blob file; image.binary: the blob file; inferences: an array with the models' outputs. |
| faceDetected | { x: number, y: number, width: number, height: number, leftEyeOpenProbability: number, rightEyeOpenProbability: number, smilingProbability: number, headEulerAngleX: number, headEulerAngleY: number, headEulerAngleZ: number } | A capture type of "face" must be started. Emits the face analysis; all parameters are null when no face is detected anymore. |
| endCapture | - | A capture type of "face" or "frame" must be started. Emitted when the number of image files created equals the number of images set (see the method setImageCaptureAmount). |
| qrCodeContent | { content: string } | A capture type of "qrcode" must be started (see startCapture). Emitted when the camera reads a QR code. |
| status | { type: 'error'/'message', status: string } | Emits an error or message from the native layer. Used mostly for debugging. |
| permissionDenied | - | Emitted when trying to preview without camera permission. |

Face Analysis

The face analysis is the response emitted by the faceDetected event. Here we specify all of its parameters.

| Attribute | Type | Description |
| --------- | ---- | ----------- |
| x | number | The x position of the face in the screen. |
| y | number | The y position of the face in the screen. |
| width | number | The width of the face in the screen. |
| height | number | The height of the face in the screen. |
| leftEyeOpenProbability | number | The probability that the left eye is open. |
| rightEyeOpenProbability | number | The probability that the right eye is open. |
| smilingProbability | number | The probability that the face is smiling. |
| headEulerAngleX | number | The angle in degrees that indicates the vertical head direction. See Head Movements. |
| headEulerAngleY | number | The angle in degrees that indicates the horizontal head direction. See Head Movements. |
| headEulerAngleZ | number | The angle in degrees that indicates the head tilt direction. See Head Movements. |
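
As an illustration of how these fields might be consumed, here is a sketch of a blink check inside a faceDetected handler. The 0.8 cutoff is our own assumption, not a plugin default:

doFaceDetected({ leftEyeOpenProbability, rightEyeOpenProbability, smilingProbability }) {
  // All parameters are null when no face is detected anymore
  if (leftEyeOpenProbability === null) return

  const EYE_OPEN_THRESHOLD = 0.8 // hypothetical cutoff, tune for your use case
  const blinking =
    leftEyeOpenProbability < EYE_OPEN_THRESHOLD &&
    rightEyeOpenProbability < EYE_OPEN_THRESHOLD

  console.log('blinking:', blinking, 'smiling:', smilingProbability > 0.8)
}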

Head Movements

Each head movement (vertical, horizontal and tilt) is mapped to a state, based on the angle in degrees that indicates the head direction:

| Head direction | Attribute | v < -36° | -36° < v < -12° | -12° < v < 12° | 12° < v < 36° | 36° < v |
| -------------- | --------- | -------- | --------------- | -------------- | ------------- | ------- |
| Vertical | headEulerAngleX | Super Down | Down | Frontal | Up | Super Up |
| Horizontal | headEulerAngleY | Super Left | Left | Frontal | Right | Super Right |
| Tilt | headEulerAngleZ | Super Right | Right | Frontal | Left | Super Left |
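
As an illustration, a small helper (our own, not part of the plugin) that maps headEulerAngleY to the horizontal states in the table above:

// Classify the horizontal head direction from headEulerAngleY, in degrees
function horizontalState(v) {
  if (v === null) return null // no face detected
  if (v < -36) return 'Super Left'
  if (v < -12) return 'Left'
  if (v < 12) return 'Frontal'
  if (v < 36) return 'Right'
  return 'Super Right'
}

horizontalState(-20) // => 'Left'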

Messages

Pre-defined message constants used by the status event.

| Message | Description |
| ------- | ----------- |
| INVALID_MINIMUM_SIZE | The face/QR code width, as a percentage of the screen width, is smaller than the set minimum. |
| INVALID_MAXIMUM_SIZE | The face/QR code width, as a percentage of the screen width, is larger than the set maximum. |
| INVALID_OUT_OF_ROI | The face bounding box is outside the set region of interest. |
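
A sketch of a status handler that reacts to these constants, assuming they arrive in the status field of the event:

doStatus({ type, status }) {
  if (type === 'error') {
    console.error('[YooCamera] native error:', status)
    return
  }
  switch (status) {
    case 'INVALID_MINIMUM_SIZE': // face/QR code too small on screen
    case 'INVALID_MAXIMUM_SIZE': // face/QR code too large on screen
      console.log('[YooCamera] adjust your distance from the camera')
      break
    case 'INVALID_OUT_OF_ROI':   // face outside the region of interest
      console.log('[YooCamera] center your face in the frame')
      break
  }
}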

Contribute and make it better

Clone the repo, change what you want and send a PR.

For commit messages we use Conventional Commits.

Contributions are always welcome; thanks to everyone who has already contributed!

Code with ❤ by the Cyberlabs AI Front-End Team
