opencv-transforms

A drop-in replacement for Torchvision Transforms using OpenCV

  • Version 0.0.6 on PyPI

opencv_transforms

This repository is intended as a faster drop-in replacement for PyTorch's Torchvision augmentations. It uses OpenCV for fast image augmentation in PyTorch computer vision pipelines. I wrote this code because the Pillow-based Torchvision transforms were starving my GPU with slow image augmentation.

Requirements

  • A working installation of OpenCV. Tested with OpenCV versions 3.4.1 and 4.1.0.
  • Tested on Windows 10 and Ubuntu 18.04. There is evidence that OpenCV doesn't work well with multithreading on Linux / macOS, for example with num_workers > 0 in a PyTorch DataLoader; I haven't run into this issue yet. A minimal DataLoader sketch follows this list.
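
As an illustration only (not taken from this repository), here is a minimal sketch of how opencv_transforms might be dropped into a PyTorch Dataset / DataLoader with num_workers > 0. The RandomImageDataset class and the choice of transforms are assumptions for the sake of the example.

import numpy as np
from torch.utils.data import Dataset, DataLoader
from opencv_transforms import transforms

class RandomImageDataset(Dataset):
    """Toy dataset yielding random uint8 images; stands in for real image loading."""
    def __init__(self, length, transform=None):
        self.length = length
        self.transform = transform

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # opencv_transforms operates on numpy arrays (H x W x C, uint8), not PIL Images
        image = np.random.randint(0, 256, size=(512, 512, 3), dtype=np.uint8)
        if self.transform is not None:
            image = self.transform(image)
        return image

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

if __name__ == "__main__":  # guard needed for multiprocessing workers on Windows / macOS
    dataset = RandomImageDataset(length=8, transform=train_transform)
    # num_workers > 0 is where the Linux / macOS caveat above could matter
    loader = DataLoader(dataset, batch_size=4, num_workers=2)
    images = next(iter(loader))
    print(images.shape)  # expected: torch.Size([4, 3, 256, 256])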

Installation

opencv_transforms is now a pip package! Simply use

  • pip install opencv_transforms

Usage

Breaking change! Please note the import syntax!

  • from opencv_transforms import transforms

From here, almost everything should work exactly as the original transforms.
Example: image resizing

import numpy as np
from opencv_transforms import transforms

# images are numpy arrays (H x W x C, uint8) rather than PIL Images
image = np.random.randint(low=0, high=255, size=(1024, 2048, 3), dtype=np.uint8)
resize = transforms.Resize(size=(256, 256))
image = resize(image)

This should be 1.5 to 10 times faster than the PIL-based transforms; see the benchmarks below. A rough timing sketch follows.
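
As an illustration only (this is not the repository's benchmark notebook), a rough timing comparison between PIL-based torchvision resizing and opencv_transforms resizing might look like the following; actual speedups depend on image size, interpolation, and hardware.

import time

import numpy as np
from PIL import Image
from torchvision import transforms as pil_transforms
from opencv_transforms import transforms as cv_transforms

image_np = np.random.randint(low=0, high=255, size=(1024, 2048, 3), dtype=np.uint8)
image_pil = Image.fromarray(image_np)

pil_resize = pil_transforms.Resize((256, 256))
cv_resize = cv_transforms.Resize((256, 256))

# time 100 PIL-based resizes
start = time.perf_counter()
for _ in range(100):
    pil_resize(image_pil)
pil_seconds = time.perf_counter() - start

# time 100 OpenCV-based resizes
start = time.perf_counter()
for _ in range(100):
    cv_resize(image_np)
cv_seconds = time.perf_counter() - start

print(f"PIL resize:    {pil_seconds:.3f} s / 100 iterations")
print(f"OpenCV resize: {cv_seconds:.3f} s / 100 iterations")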

Performance

  • Most transformations are between 1.5X and ~4X faster in OpenCV. Large image resizes are up to 10 times faster in OpenCV.
  • To reproduce the following benchmarks, download the Cityscapes dataset.
  • An example benchmarking file can be found in the notebook bencharming_v2.ipynb. I wrapped the Cityscapes default directories with an HDF5 file for even faster reading.

Benchmark plots (see the repository README): resize, random crop, change brightness, change brightness and contrast, change contrast only, random horizontal flips.

The savings add up when you compose multiple transformations together (see the composed-transformations benchmark plot in the repository README). A sketch of a composed pipeline follows.
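
As a sketch only, assuming the transform names mirror torchvision (RandomCrop, RandomHorizontalFlip, and ColorJitter are assumptions here, not confirmed by this page), a composed pipeline might look like this:

import numpy as np
from opencv_transforms import transforms

# compose several augmentations; this is where OpenCV's per-transform speedup compounds
pipeline = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = np.random.randint(low=0, high=255, size=(1024, 2048, 3), dtype=np.uint8)
augmented = pipeline(image)  # augmented image, numpy array in / numpy array out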

TODO

  • Initial commit with all currently implemented torchvision transforms
  • Cityscapes benchmarks
  • Make the resample flag on RandomRotation and RandomAffine actually do something
  • Speed up augmentation in saturation and hue. Currently, the fastest way is to convert to a PIL image, perform the same augmentation as Torchvision, then convert back to an np.ndarray (a sketch of this workaround follows this list)
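
For illustration, here is a sketch of the workaround described in the last item, assuming torchvision's ColorJitter is used for the PIL-side augmentation; the library's actual internals may differ.

import numpy as np
from PIL import Image
from torchvision import transforms as pil_transforms

def jitter_saturation_hue(image_np, saturation=0.2, hue=0.05):
    """image_np: H x W x 3 uint8 RGB array."""
    jitter = pil_transforms.ColorJitter(saturation=saturation, hue=hue)
    image_pil = Image.fromarray(image_np)   # numpy -> PIL
    image_pil = jitter(image_pil)           # same augmentation as Torchvision
    return np.asarray(image_pil)            # PIL -> numpy

image = np.random.randint(low=0, high=255, size=(512, 512, 3), dtype=np.uint8)
augmented = jitter_saturation_hue(image)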
