
vision-camera-dynamsoft-label-recognizer

React Native Vision Camera Frame Processor Plugin of Dynamsoft Label Recognizer

Demo video

Installation

npm install vision-camera-dynamsoft-label-recognizer

Make sure you have set up react-native-reanimated correctly, then add the following entry to the plugins array of your babel.config.js (Reanimated's plugin must be the last plugin listed):

[
  'react-native-reanimated/plugin',
  {
    globals: ['__recognize'],
  },
]

Proguard Rules for Android

Add the following to android/app/proguard-rules.pro:

-keep class androidx.camera.core.** {*;}

Usage

  1. Live scanning using React Native Vision Camera.

    import * as React from 'react';
    import { StyleSheet, Alert } from 'react-native';
    import { Camera, useCameraDevices, useFrameProcessor } from 'react-native-vision-camera';
    import { recognize, ScanConfig } from 'vision-camera-dynamsoft-label-recognizer';
    import * as DLR from 'vision-camera-dynamsoft-label-recognizer';
    import * as REA from 'react-native-reanimated';
    
    export default function App() {
      const [hasPermission, setHasPermission] = React.useState(false);
      const devices = useCameraDevices();
      const device = devices.back;
    
      React.useEffect(() => {
        (async () => {
          const status = await Camera.requestCameraPermission();
          setHasPermission(status === 'authorized');
          const result = await DLR.initLicense("<license>"); //apply for a 30-day trial license here: https://www.dynamsoft.com/customer/license/trialLicense/?product=dlr
          if (result === false) {
            Alert.alert("Error","License invalid");
          }
        })();
      }, []);
    
      const frameProcessor = useFrameProcessor((frame) => {
        'worklet';
        const config: ScanConfig = {};
        const result = recognize(frame, config);
        console.log(result);
      }, []);
    
      return (
        device != null &&
        hasPermission && (
          <>
            <Camera
              style={StyleSheet.absoluteFill}
              device={device}
              isActive={true}
              frameProcessor={frameProcessor}
              frameProcessorFps={1}
            />
          </>
        )
      );
    }
    
    const styles = StyleSheet.create({
      container: {
        flex: 1,
        alignItems: 'center',
        justifyContent: 'center',
      },
    });
    
  2. Recognizing text from static images.

    import * as DLR from "vision-camera-dynamsoft-label-recognizer";
    // call inside an async function
    const result = await DLR.decodeBase64(base64);
    

Interfaces

Scanning configuration:

// values are percentages of the frame dimensions
export interface ScanRegion{
  left: number;
  top: number;
  width: number;
  height: number;
}

export interface ScanConfig{
  scanRegion?: ScanRegion;
  includeImageBase64?: boolean;
}

export interface CustomModelConfig {
  customModelFolder: string;
  customModelFileNames: string[];
}
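
For instance, to restrict recognition to a horizontal band across the middle of the frame, you could build a config like this (the region values are illustrative, not recommendations):

```typescript
// Local copies of the interfaces above, so the example is self-contained
interface ScanRegion { left: number; top: number; width: number; height: number; }
interface ScanConfig { scanRegion?: ScanRegion; includeImageBase64?: boolean; }

// A band starting 40% from the top, spanning 90% of the width and 20% of the height
const config: ScanConfig = {
  scanRegion: { left: 5, top: 40, width: 90, height: 20 },
  includeImageBase64: false,
};
```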

You can use a custom model, such as one for MRZ passport reading, via the CustomModelConfig prop, updating the template accordingly. You can find the MRZ model and template in the example.

Put the model folder in the assets folder for Android, or in the app's root directory for iOS.
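
A sketch of building such a config for the MRZ case (the folder and file names below are hypothetical; use the ones shipped with the example):

```typescript
interface CustomModelConfig {
  customModelFolder: string;
  customModelFileNames: string[];
}

// Hypothetical MRZ model layout: a folder named "MRZ" placed under
// assets/ on Android or the app root on iOS, as described above.
const mrzModelConfig: CustomModelConfig = {
  customModelFolder: 'MRZ',
  customModelFileNames: ['MRZ'],
};
```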

About the result:

export interface ScanResult {
  results: DLRResult[];
  imageBase64?: string;
}

export interface DLRResult {
  referenceRegionName: string;
  textAreaName: string;
  pageNumber: number;
  location: Quadrilateral;
  lineResults: DLRLineResult[];
}

export interface Quadrilateral{
  points:Point[];
}

export interface Point {
  x:number;
  y:number;
}

export interface DLRLineResult {
  text: string;
  confidence: number;
  characterModelName: string;
  characterResults: DLRCharacherResult[];
  lineSpecificationName: string;
  location: Quadrilateral;
}

export interface DLRCharacherResult {
  characterH: string;
  characterM: string;
  characterL: string;
  characterHConfidence: number;
  characterMConfidence: number;
  characterLConfidence: number;
  location: Quadrilateral;
}
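
To pull recognized text out of a ScanResult, you can walk results and their lineResults. A minimal sketch against the interfaces above (local copies trimmed to the fields used; the confidence threshold is illustrative):

```typescript
// Trimmed local copies of the result interfaces, for a self-contained example
interface Point { x: number; y: number; }
interface Quadrilateral { points: Point[]; }
interface DLRLineResult { text: string; confidence: number; location: Quadrilateral; }
interface DLRResult { lineResults: DLRLineResult[]; }
interface ScanResult { results: DLRResult[]; }

// Collect every recognized line whose confidence clears a threshold
function extractText(scan: ScanResult, minConfidence = 50): string[] {
  return scan.results.flatMap((r) =>
    r.lineResults
      .filter((line) => line.confidence >= minConfidence)
      .map((line) => line.text)
  );
}
```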

Supported Platforms

  • Android
  • iOS

Detailed Installation Guide

Let's create a new react native project and use the plugin.

  1. Create a new project: npx react-native init MyTestApp
  2. Install the required packages: npm install vision-camera-dynamsoft-label-recognizer react-native-reanimated react-native-vision-camera. Update the relevant files following the react-native-reanimated installation guide. You can use JSC instead of Hermes.
  3. Update the babel.config.js file.
  4. Add camera permissions for both Android and iOS.
  5. Update App.tsx to use the camera and the plugin.
  6. For Android, register the plugin in MainApplication.java following the guide.
  7. Run the project: npx react-native run-android (or npx react-native run-ios).

You can check out the example for more details.

Blogs on How the Plugin is Made

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

MIT


Made with create-react-native-library

Package last updated on 28 Aug 2023
