vision-camera-resize-plugin

A VisionCamera Frame Processor Plugin for fast and efficient Frame resizing, cropping and pixel-format conversion using GPU-acceleration and CPU-vector based operations.

Installation

  1. Install react-native-vision-camera and make sure Frame Processors are enabled.
  2. Install vision-camera-resize-plugin:

```sh
yarn add vision-camera-resize-plugin
cd ios && pod install
```
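Frame Processors depend on react-native-worklets-core, which also needs its Babel plugin registered. A minimal sketch of that configuration, assuming VisionCamera v3 with react-native-worklets-core (check the VisionCamera docs for your version):

```js
// babel.config.js — sketch assuming VisionCamera v3 + react-native-worklets-core
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    // Enables the 'worklet' directive used by Frame Processors
    ['react-native-worklets-core/plugin'],
  ],
};
```

After changing the Babel config, restart Metro with a cleared cache (e.g. `yarn start --reset-cache`).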

Usage

```tsx
import { Camera, useFrameProcessor } from 'react-native-vision-camera'
import { resize } from 'vision-camera-resize-plugin'

function App() {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'

    const resized = resize(frame, {
      size: {
        width: 100,
        height: 100
      },
      pixelFormat: 'rgb (8-bit)'
    })
  }, [])

  return <Camera frameProcessor={frameProcessor} {...props} />
}
```

react-native-fast-tflite

The vision-camera-resize-plugin can be used together with react-native-fast-tflite to prepare the input tensor data.

For example, to use the efficientdet TFLite model to detect objects inside a Frame, simply add the model to your app's bundle, set up VisionCamera and react-native-fast-tflite, and resize your Frames accordingly.

From the model's description on the website, we know that it expects 320 x 320 x 3 input buffers in uint8 RGB format.
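As a sanity check on the resized buffer, 320 x 320 pixels with three one-byte channels works out to 307,200 bytes. A quick sketch of that arithmetic in plain JavaScript:

```js
// Byte length of a 320x320 uint8 RGB tensor: width * height * channels
const width = 320
const height = 320
const channels = 3 // one uint8 each for R, G, B
const expectedBytes = width * height * channels
console.log(expectedBytes) // 307200
```

Comparing `expectedBytes` against the length of the buffer returned by `resize` is a cheap way to catch a mismatched size or pixel format before feeding the model.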

```tsx
import { useTensorflowModel } from 'react-native-fast-tflite'
import { useFrameProcessor } from 'react-native-vision-camera'
import { resize } from 'vision-camera-resize-plugin'

const objectDetection = useTensorflowModel(require('assets/efficientdet.tflite'))
const model = objectDetection.state === 'loaded' ? objectDetection.model : undefined

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  if (model == null) return // model is still loading

  const data = resize(frame, {
    size: {
      width: 320,
      height: 320,
    },
    pixelFormat: 'rgb (8-bit)'
  })
  const output = model.runSync([data])

  const numDetections = output[0]
  console.log(`Detected ${numDetections} objects!`)
}, [model])
```

Benchmarks

I benchmarked vision-camera-resize-plugin on an iPhone 15 Pro, using the following code:

```ts
const start = performance.now()
const result = resize(frame, {
  size: {
    width: 100,
    height: 100,
  },
  pixelFormat: 'rgb (8-bit)',
})
const end = performance.now()

const diff = (end - start).toFixed(2)
console.log(`Resize and conversion took ${diff}ms!`)
```

And when running on 1080x1920 yuv Frames, I got the following results:

```
 LOG  Resize and conversion took 6.48ms
 LOG  Resize and conversion took 6.06ms
 LOG  Resize and conversion took 5.89ms
 LOG  Resize and conversion took 5.97ms
 LOG  Resize and conversion took 6.98ms
```

This means the Frame Processor can run at up to ~160 FPS.
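The FPS estimate follows directly from the per-frame latency. A quick sketch of the arithmetic, using the timings from the log above:

```js
// Throughput implied by per-frame resize latency: 1000 ms / latency
const timingsMs = [6.48, 6.06, 5.89, 5.97, 6.98]
const avgMs = timingsMs.reduce((a, b) => a + b, 0) / timingsMs.length
const avgFps = 1000 / avgMs                   // ~159 FPS at the average latency
const peakFps = 1000 / Math.min(...timingsMs) // ~170 FPS at the fastest frame
console.log(avgFps.toFixed(0), peakFps.toFixed(0))
```

The average latency of roughly 6.3 ms is what yields the ~160 FPS figure; the fastest measured frame is a bit quicker still.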

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

MIT


Made with create-react-native-library

Package last updated on 16 Jan 2024
