vision-camera-resize-plugin

A VisionCamera Frame Processor Plugin for fast and efficient Frame resizing, cropping and pixel-format conversion (YUV -> RGB) using GPU-acceleration, CPU-vector based operations and ARM NEON SIMD acceleration.

Installation

  1. Install react-native-vision-camera (>= 3.8.2) and react-native-worklets-core (>= 0.2.4) and make sure Frame Processors are enabled.
  2. Install vision-camera-resize-plugin:
    yarn add vision-camera-resize-plugin
    cd ios && pod install
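
Frame Processors also require the Worklets babel plugin to be registered. If you haven't set that up yet, a typical babel.config.js looks something like this (a sketch following the react-native-worklets-core setup docs; the preset name varies with your React Native version):

```javascript
// babel.config.js — register the Worklets plugin so functions marked
// with 'worklet' can run on the Frame Processor thread.
module.exports = {
  presets: ['module:@react-native/babel-preset'],
  plugins: ['react-native-worklets-core/plugin'],
}
```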
    

Usage

Use the resize plugin within a Frame Processor:

const { resize } = useResizePlugin()

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'

  const resized = resize(frame, {
    scale: {
      width: 192,
      height: 192
    },
    pixelFormat: 'rgb',
    dataType: 'uint8'
  })

  const firstPixel = {
    r: resized[0],
    g: resized[1],
    b: resized[2]
  }
}, [])
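The returned buffer is flat and row-major, so pixel (x, y) starts at index (y * width + x) * channels — that's why the first pixel's R, G and B live at indices 0, 1 and 2 above. A small hypothetical helper (not part of the plugin API) to read an arbitrary pixel:

```javascript
// Read pixel (x, y) from a flat, row-major RGB buffer such as the one
// returned by resize() with pixelFormat: 'rgb'. Hypothetical helper,
// shown only to illustrate the buffer layout.
function getPixel(buffer, width, x, y) {
  const channels = 3 // 'rgb' has three bytes per pixel
  const i = (y * width + x) * channels
  return { r: buffer[i], g: buffer[i + 1], b: buffer[i + 2] }
}
```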

Or outside of a function component:

const { resize } = createResizePlugin()

const frameProcessor = createFrameProcessor((frame) => {
  'worklet'

  const resized = resize(frame, {
    // ...
  })
  // ...
})

Pixel Formats

The resize plugin operates in RGB colorspace.

| Name | 0 | 1 | 2 | 3 |
| ---- | - | - | - | - |
| rgb  | R | G | B | R |
| rgba | R | G | B | A |
| argb | A | R | G | B |
| bgra | B | G | R | A |
| bgr  | B | G | R | B |
| abgr | A | B | G | R |

For the 3-channel formats (rgb and bgr), byte 3 is the first byte of the next pixel.
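To illustrate the byte layouts above, here is a hypothetical reorder from rgba to argb. In practice you would ask the plugin for argb directly; this only shows what the table means:

```javascript
// Reorder an 'rgba' buffer into 'argb' (4 bytes per pixel).
// Hypothetical illustration of the byte layouts, not part of the plugin API.
function rgbaToArgb(rgba) {
  const out = new Uint8Array(rgba.length)
  for (let i = 0; i < rgba.length; i += 4) {
    out[i] = rgba[i + 3]     // A
    out[i + 1] = rgba[i]     // R
    out[i + 2] = rgba[i + 1] // G
    out[i + 3] = rgba[i + 2] // B
  }
  return out
}
```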

Data Types

The resize plugin can convert to either uint8 or float32 values:

| Name    | JS Type      | Value Range | Example size                   |
| ------- | ------------ | ----------- | ------------------------------ |
| uint8   | Uint8Array   | 0...255     | 1920x1080 RGB Frame = ~6.2 MB  |
| float32 | Float32Array | 0.0...1.0   | 1920x1080 RGB Frame = ~24.8 MB |
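The example sizes follow directly from width × height × channels × bytes-per-element, and the float32 variant is just the uint8 data normalized into 0.0–1.0. A quick sketch of both:

```javascript
// Size math behind the table (1920x1080 'rgb' frame):
const uint8Bytes = 1920 * 1080 * 3       // 6,220,800 bytes ≈ 6.2 MB
const float32Bytes = 1920 * 1080 * 3 * 4 // 24,883,200 bytes ≈ 24.8 MB

// Normalizing uint8 [0, 255] into float32 [0.0, 1.0] is a per-element divide:
function toFloat32(data) {
  const out = new Float32Array(data.length)
  for (let i = 0; i < data.length; i++) out[i] = data[i] / 255
  return out
}
```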

Cropping

When scaling to a different size (e.g. 1920x1080 -> 100x100), the Resize Plugin performs a center-crop on the image before scaling it down so the resulting image matches the target aspect ratio instead of being stretched.

You can customize this by passing a custom crop parameter, e.g. instead of center-crop, use the top portion of the screen:

const resized = resize(frame, {
  scale: {
    width: 192,
    height: 192
  },
  crop: {
    y: 0,
    x: 0,
    // 1:1 aspect ratio because we scale to 192x192
    width: frame.width,
    height: frame.width
  },
  pixelFormat: 'rgb',
  dataType: 'uint8'
})
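The default center-crop can be thought of as: take the largest rectangle with the target aspect ratio that fits inside the frame, centered. A sketch of that math (assumed behavior mirroring what the plugin does internally, not its actual implementation):

```javascript
// Compute a centered crop rectangle matching the target aspect ratio.
function centerCrop(frameWidth, frameHeight, targetWidth, targetHeight) {
  const targetAspect = targetWidth / targetHeight
  let width = frameWidth
  let height = frameHeight
  if (frameWidth / frameHeight > targetAspect) {
    width = Math.round(frameHeight * targetAspect) // frame too wide: trim sides
  } else {
    height = Math.round(frameWidth / targetAspect) // frame too tall: trim top/bottom
  }
  return {
    x: Math.round((frameWidth - width) / 2),
    y: Math.round((frameHeight - height) / 2),
    width,
    height,
  }
}
```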

Performance

If possible, use one of these two formats:

  • argb in uint8: Can be converted the fastest, but has an additional unused alpha channel.
  • rgb in uint8: Requires one more conversion step from argb, but uses 25% less memory due to the removed alpha channel.

All other formats require additional conversion steps, and float32 buffers carry additional memory overhead (4x as big as their uint8 counterparts).

When using TensorFlow Lite, try to convert your model to use argb-uint8 or rgb-uint8 as its input type.

react-native-fast-tflite

The vision-camera-resize-plugin can be used together with react-native-fast-tflite to prepare the input tensor data.

For example, to use the efficientdet TFLite model to detect objects inside a Frame, simply add the model to your app's bundle, set up VisionCamera and react-native-fast-tflite, and resize your Frames accordingly.

From the model's description on the website, we understand that it expects 320 x 320 x 3 input buffers in uint8 rgb format.

const objectDetection = useTensorflowModel(require('assets/efficientdet.tflite'))
const model = objectDetection.state === "loaded" ? objectDetection.model : undefined

const { resize } = useResizePlugin()

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  if (model == null) return

  const data = resize(frame, {
    scale: {
      width: 320,
      height: 320,
    },
    pixelFormat: 'rgb',
    dataType: 'uint8'
  })
  const output = model.runSync([data])

  const numDetections = output[0]
  console.log(`Detected ${numDetections} objects!`)
}, [model])

Benchmarks

I benchmarked vision-camera-resize-plugin on an iPhone 15 Pro, using the following code:

const start = performance.now()
const result = resize(frame, {
  scale: {
    width: 100,
    height: 100,
  },
  pixelFormat: 'rgb',
  dataType: 'uint8'
})
const end = performance.now()

const diff = (end - start).toFixed(2)
console.log(`Resize and conversion took ${diff}ms!`)

And when running on 1080x1920 yuv Frames, I got the following results:

 LOG  Resize and conversion took 6.48ms
 LOG  Resize and conversion took 6.06ms
 LOG  Resize and conversion took 5.89ms
 LOG  Resize and conversion took 5.97ms
 LOG  Resize and conversion took 6.98ms

This means the Frame Processor can run at up to ~160 FPS.
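The ~160 FPS figure is just the reciprocal of the average frame time. Under the five samples above:

```javascript
// Average the benchmark samples and convert milliseconds to frames per second.
const samples = [6.48, 6.06, 5.89, 5.97, 6.98]
const avgMs = samples.reduce((sum, s) => sum + s, 0) / samples.length // ≈ 6.28 ms
const maxFps = 1000 / avgMs // ≈ 159 FPS, i.e. roughly 160
```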

Adopting at scale

Has this library helped you? Consider sponsoring!

This library is provided as-is; I work on it in my free time.

If you're integrating vision-camera-resize-plugin into a production app, consider funding this project and contacting me to receive premium enterprise support: help with issues, prioritized bugfixes, feature requests, and assistance integrating vision-camera-resize-plugin and/or VisionCamera Frame Processors.

Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.

License

MIT


Made with create-react-native-library

Last updated on 14 Mar 2024
