FFCV: Fast Forward Computer Vision



Fast Forward Computer Vision: train models at a fraction of the cost with accelerated data loading!

[install] [quickstart] [features] [docs] [support slack] [homepage]
Maintainers: Guillaume Leclerc, Andrew Ilyas and Logan Engstrom

ffcv is a drop-in data loading system that dramatically increases data throughput in model training:

  • Train an ImageNet model on one GPU in 35 minutes (98¢/model on AWS)
  • Train a CIFAR-10 model on one GPU in 36 seconds (2¢/model on AWS)
  • Train a $YOUR_DATASET model $REALLY_FAST (for $WAY_LESS)

Keep your training algorithm the same; just replace the data loader for substantial speedups.

ffcv also comes prepackaged with fast, simple code for standard vision benchmarks.

Installation

Linux

conda create -y -n ffcv python=3.9 cupy pkg-config compilers libjpeg-turbo opencv pytorch torchvision cudatoolkit=11.3 numba -c pytorch -c conda-forge
conda activate ffcv
pip install ffcv

Troubleshooting note: if the above commands result in a package conflict error, try running conda config --env --set channel_priority flexible in the environment and rerunning the installation command.

Windows

  • Install opencv4
    • Add ..../opencv/build/x64/vc15/bin to PATH environment variable
  • Install libjpeg-turbo, download libjpeg-turbo-x.x.x-vc64.exe, not gcc64
    • Add ..../libjpeg-turbo64/bin to PATH environment variable
  • Install pthread, download the latest release.zip
    • After unzipping, rename the Pre-build.2 folder to pthread
    • Open pthread/include/pthread.h, and add the code below to the top of the file.
    #define HAVE_STRUCT_TIMESPEC
    
    • Add ..../pthread/dll to PATH environment variable
  • Install cupy depending on your CUDA Toolkit version.
  • pip install ffcv
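For the CuPy step above, prebuilt wheels matched to the CUDA toolkit are available on PyPI; the wheel name below assumes CUDA 11.x, so check CuPy's installation guide for the wheel matching your version:

```shell
# CuPy wheel for CUDA 11.x; pick the wheel that matches your installed toolkit
pip install cupy-cuda11x
```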

Citation

If you use FFCV, please cite it as:

@misc{leclerc2022ffcv,
    author = {Guillaume Leclerc and Andrew Ilyas and Logan Engstrom and Sung Min Park and Hadi Salman and Aleksander Madry},
    title = {{FFCV}: Accelerating Training by Removing Data Bottlenecks},
    year = {2022},
    howpublished = {\url{https://github.com/libffcv/ffcv/}},
    note = {commit xxxxxxx}
}

(Make sure to replace xxxxxxx above with the hash of the commit used!)

Quickstart

Accelerate any learning system with ffcv. First, convert your dataset into ffcv format (ffcv converts both indexed PyTorch datasets and WebDatasets):

from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField

# Your dataset (`torch.utils.data.Dataset`) of (image, label) pairs
my_dataset = make_my_dataset()
write_path = '/output/path/for/converted/ds.beton'

# Pass a type for each data field
writer = DatasetWriter(write_path, {
    # Tune options to optimize dataset size, throughput at train-time
    'image': RGBImageField(max_resolution=256, jpeg_quality=90),
    'label': IntField()
})

# Write dataset
writer.from_indexed_dataset(my_dataset)

Then replace your old loader with the ffcv loader at train time (in PyTorch, no other changes required!):

from ffcv.loader import Loader, OrderOption
from ffcv.transforms import ToTensor, ToDevice, ToTorchImage, Cutout
from ffcv.fields.decoders import IntDecoder, RandomResizedCropRGBImageDecoder

# Random resized crop
decoder = RandomResizedCropRGBImageDecoder((224, 224))

# Data decoding and augmentation
image_pipeline = [decoder, Cutout(), ToTensor(), ToTorchImage(), ToDevice(0)]
label_pipeline = [IntDecoder(), ToTensor(), ToDevice(0)]

# Pipeline for each data field
pipelines = {
    'image': image_pipeline,
    'label': label_pipeline
}

# Replaces PyTorch data loader (`torch.utils.data.DataLoader`)
loader = Loader(write_path, batch_size=bs, num_workers=num_workers,
                order=OrderOption.RANDOM, pipelines=pipelines)

# rest of training / validation proceeds identically
for epoch in range(epochs):
    ...
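The elided loop body is whatever your training step already does; nothing about it needs to change. As a minimal, hypothetical sketch (the `step` callback stands in for your model's forward/backward pass and is not part of the FFCV API), any code that iterates over `(image, label)` batches works unchanged:

```python
def train(loader, step, epochs):
    """Drive `step(images, labels)` over every batch, `epochs` times.

    `loader` can be any iterable of (images, labels) batches -- the FFCV
    Loader plugs in here exactly like torch.utils.data.DataLoader.
    """
    losses = []
    for epoch in range(epochs):
        for images, labels in loader:
            losses.append(step(images, labels))
    return losses
```

In practice `step` would run the forward pass, compute the loss, and update the optimizer; only the loader construction differs from a plain PyTorch loop.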

See here for a more detailed guide to deploying ffcv for your dataset.

Prepackaged Computer Vision Benchmarks

From gridding to benchmarking to fast research iteration, there are many reasons to want faster model training. Below we present premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations.

ImageNet

We provide a self-contained script for training ImageNet fast. The table below reports the training time versus accuracy trade-off for 1-GPU ResNet-18 and 8-GPU ResNet-50 runs.

| Link to Config | top-1 | top-5 | # Epochs | Time (mins) | Architecture | Setup |
|---|---|---|---|---|---|---|
| Link | 0.784 | 0.941 | 88 | 77.2 | ResNet-50 | 8 x A100 |
| Link | 0.780 | 0.937 | 56 | 49.4 | ResNet-50 | 8 x A100 |
| Link | 0.772 | 0.932 | 40 | 35.6 | ResNet-50 | 8 x A100 |
| Link | 0.766 | 0.927 | 32 | 28.7 | ResNet-50 | 8 x A100 |
| Link | 0.756 | 0.921 | 24 | 21.7 | ResNet-50 | 8 x A100 |
| Link | 0.738 | 0.908 | 16 | 14.9 | ResNet-50 | 8 x A100 |
| Link | 0.724 | 0.903 | 88 | 187.3 | ResNet-18 | 1 x A100 |
| Link | 0.713 | 0.899 | 56 | 119.4 | ResNet-18 | 1 x A100 |
| Link | 0.706 | 0.894 | 40 | 85.5 | ResNet-18 | 1 x A100 |
| Link | 0.700 | 0.889 | 32 | 68.9 | ResNet-18 | 1 x A100 |
| Link | 0.688 | 0.881 | 24 | 51.6 | ResNet-18 | 1 x A100 |
| Link | 0.669 | 0.868 | 16 | 35.0 | ResNet-18 | 1 x A100 |

Train your own ImageNet models! You can use our training script and premade configurations to train any model seen on the above graphs.

CIFAR-10

We also include premade code for efficient training on CIFAR-10 in the examples/ directory, reaching 93% top-1 accuracy in 36 seconds on a single A100 GPU (without optimizations such as MixUp or Ghost BatchNorm, which could raise accuracy even further). You can find the training script here.

Features

Computer vision or not, FFCV can help make training faster in a variety of resource-constrained settings! Our performance guide has a more detailed account of the ways in which FFCV can adapt to different performance bottlenecks.

  • Plug-and-play with any existing training code: Rather than changing aspects of model training itself, FFCV focuses on removing data bottlenecks, which turn out to be a problem everywhere from neural network training to linear regression. This means that:

    • FFCV can be introduced into any existing training code in just a few lines of code (e.g., just swapping out the data loader and optionally the augmentation pipeline);
    • You don't have to change the model itself to make it faster (e.g., feel free to analyze models without CutMix, Dropout, momentum scheduling, etc.);
    • FFCV can speed up much more than just neural network training; in fact, the more data-bottlenecked the application (e.g., linear regression, bulk inference, etc.), the more FFCV will accelerate it!

    See our Getting started guide, Example walkthroughs, and Code examples to see how easy it is to get started!

  • Fast data processing without the pain: FFCV automatically handles data reading, pre-fetching, caching, and transfer between devices in an extremely efficient way, so that users don't have to think about it.

  • Automatically fused-and-compiled data processing: By either using pre-written FFCV transformations or easily writing custom ones, users can take advantage of FFCV's compilation and pipelining abilities, which will automatically fuse and compile simple Python augmentations to machine code using Numba, and schedule them asynchronously to avoid loading delays.
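To make this concrete, here is a hypothetical kernel written in the restricted, loop-and-slice NumPy style that Numba can compile to machine code. The FFCV `Operation` wrapper that would register it as a pipeline stage is omitted, so this is only a sketch of the kind of function FFCV fuses, not FFCV's actual API:

```python
import numpy as np

def cutout_kernel(images, dst, size=8):
    """Zero out one `size` x `size` square per image in an NHWC batch.

    Plain loops and slicing, no Python objects: exactly the style that a
    JIT compiler such as Numba turns into machine code.
    """
    n, h, w, _ = images.shape
    dst[:] = images
    for i in range(n):
        # Pick a random top-left corner so the square stays inside the image
        y = np.random.randint(0, h - size)
        x = np.random.randint(0, w - size)
        dst[i, y:y + size, x:x + size, :] = 0
    return dst
```

Because the kernel writes into a preallocated `dst` buffer rather than allocating, a pipeline of such stages can run asynchronously without the Python interpreter in the loop.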

  • Load data fast from RAM, SSD, or networked disk: FFCV exposes user-friendly options that can be adjusted based on the resources available. For example, if a dataset fits into memory, FFCV can cache it at the OS level and ensure that multiple concurrent processes all get fast data access. Otherwise, FFCV can use fast process-level caching and will optimize data loading to minimize the underlying number of disk reads. See The Bottleneck Doctor guide for more information.

  • Training multiple models per GPU: Thanks to fully asynchronous thread-based data loading, you can now interleave training multiple models on the same GPU efficiently, without any data-loading overhead. See this guide for more info.

  • Dedicated tools for image handling: All the features above are equally applicable to all sorts of machine learning models, but FFCV also offers vision-specific features, such as fast JPEG encoding and decoding, and storing datasets as mixtures of raw and compressed images to trade off I/O overhead against compute overhead. See the Working with images guide for more information.
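For instance, when writing a dataset you can ask `RGBImageField` to store a random fraction of images as raw pixels and the rest as JPEG, trading I/O volume against decode compute. The specific option values below are illustrative, not prescriptive:

```python
from ffcv.fields import RGBImageField

# Store roughly half the images raw (fast to decode, large on disk) and
# half JPEG-compressed (small on disk, more decode compute).
field = RGBImageField(
    write_mode='proportion',   # mix raw and compressed storage
    compress_probability=0.5,  # fraction of images stored as JPEG
    max_resolution=256,
    jpeg_quality=90,
)
```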

Contributors

  • Guillaume Leclerc
  • Logan Engstrom
  • Andrew Ilyas
  • Sam Park
  • Hadi Salman
