Ultralytics YOLOv8 for SOTA object detection, multi-object tracking, instance segmentation, pose estimation and image classification.
An efficient library for image augmentation, providing extensive transformations to support machine learning and computer vision tasks.
Open Source Differentiable Computer Vision Library for PyTorch
A set of easy-to-use utils that will come in handy in any Computer Vision project
Low-level implementations for computer vision in Rust
A series of convenience functions that make basic image processing operations (translation, rotation, resizing, skeletonization, displaying Matplotlib images, sorting contours, edge detection, and more) easier with OpenCV on both Python 2.7 and Python 3.
Video scene cut/shot detection program and Python library.
QUick and DIrty Domain Adaptation
Collection of common code shared among different research projects of the FAIR computer vision team
Image augmentation library for deep neural networks
A high-performance image processing library designed to optimize and extend the Albumentations library with specialized functions for advanced image transformations. Perfect for developers working in computer vision who require efficient and scalable image augmentation.
The Rerun Logging SDK
A toolkit for making real world machine learning and data analysis applications
Open Source Image and Video Super-Resolution Toolbox
GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration
OpenMMLab Detection Toolbox and Benchmark
Real-ESRGAN aims at developing Practical Algorithms for General Image Restoration
Light Weight Toolkit for Bounding Boxes
FiftyOne: the open-source tool for building high-quality datasets and computer vision models
OpenMMLab Image Classification Toolbox and Benchmark
Microsoft Azure Cognitive Services Computer Vision Client Library for Python
OpenMMLab Computer Vision Foundation
Open MMLab Semantic Segmentation Toolbox and Benchmark
OpenMMLab Unified Video Perception Platform
Mahotas: Computer Vision Library
Industry-strength computer vision extensions for Keras.
Document Text Recognition (docTR): deep learning for high-performance OCR on documents.
Provides spatial maths capability for Python
SuperGradients
Automation with Computer Vision for Python
Savant Rust core functions library
Ultralytics HUB Client SDK.
OpenMMLab Pose Estimation Toolbox and Benchmark.
Easily turn a set of image URLs into an image dataset
Catalyst. Accelerated deep learning R&D with PyTorch.
With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
Computer vision helper library
OpenMMLab Model Pretraining Toolbox and Benchmark
CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet for a given image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3. We found CLIP matches the performance of the original ResNet50 on ImageNet "zero-shot", without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.
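The zero-shot scoring rule this description refers to, ranking candidate captions for an image by the similarity of their embeddings, can be sketched with toy vectors. This is a minimal illustration of the mechanism using made-up embeddings, not the real CLIP model or its API:

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs):
    """Softmax over scaled cosine similarities between one image
    embedding and several candidate text embeddings (the CLIP
    zero-shot scoring rule)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = 100.0 * (txt @ img)  # CLIP scales similarities before softmax
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy embeddings (hypothetical): the image vector is closest to the
# first caption, so it should receive nearly all of the probability.
image = np.array([1.0, 0.0, 0.2])
captions = np.array([[0.9, 0.1, 0.1],   # e.g. "a photo of a cat"
                     [0.0, 1.0, 0.0]])  # e.g. "a photo of a dog"
probs = zero_shot_scores(image, captions)
```

In the real library the embeddings come from the model's image and text encoders; here they are stand-ins chosen only to show how a prompt list turns into a probability distribution over labels.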
With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference CLI.
Differential geometric computer vision for deep learning
Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes