AS-One v2: A Modular Library for YOLO Object Detection, Segmentation, Tracking & Pose
👋 Hello
==UPDATE: AS-One v2 is now out! We've updated it with YOLOv9 and SAM.==
AS-One is a Python wrapper for multiple detection and tracking algorithms, all in one place. Different trackers, such as ByteTrack, DeepSORT, or NorFair, can be integrated with different versions of YOLO in a minimal number of lines of code.
The wrapper provides YOLO models in ONNX, PyTorch, and CoreML flavors. We plan to offer support for future versions of YOLO as they are released.
This is One Library for most of your computer vision needs.
If you would like to dive deeper into YOLO Object Detection and Tracking, check out our courses and projects.
Watch the step-by-step tutorial 🤝
💻 Install
🔥 Prerequisites
pip install asone
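To verify the installation, try importing the package. This is just a quick sanity check; it assumes python points at the environment you installed into:
python -c "import asone"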
👉 Install from Source
💾 Clone the Repository
Navigate to an empty folder of your choice.
git clone https://github.com/augmentedstartups/AS-One.git
Change Directory to AS-One
cd AS-One
👉 For Linux
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
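After installing the GPU build, you can check that PyTorch actually sees your GPU using PyTorch's standard CUDA check (the same check applies to the Windows GPU install below):
python -c "import torch; print(torch.cuda.is_available())"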
👉 For Windows 10/11
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox
pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
👉 For MacOS
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
# for CPU
pip install torch torchvision
Quick Start 🏃‍♂️
Use the tracker on a sample video.
import asone
from asone import ASOne
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=False)
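If you want to save the annotated output rather than just generate it, you can write each frame to a video file with OpenCV. This is a minimal sketch, not the library's own API: it assumes ASOne.draw returns the annotated frame as a BGR NumPy array and hardcodes a 30 FPS output.
import cv2

writer = None
for model_output in tracks:
    frame = ASOne.draw(model_output, display=False)  # assumption: returns the annotated BGR frame
    if writer is None:
        height, width = frame.shape[:2]
        # assumption: 30 FPS; match your source video's frame rate if known
        writer = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30, (width, height))
    writer.write(frame)
if writer is not None:
    writer.release()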
Run in Google Colab 💻
Sample Code Snippets 📃
6.1 👉 Object Detection
import asone
from asone import ASOne
model = ASOne(detector=asone.YOLOV9_C, use_cuda=True)
vid = model.read_video('data/sample_videos/test.mp4')
for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True)
Run asone/demo_detector.py to test the detector.
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
6.1.1 👉 Use Custom Trained Weights for Detector
Use custom weights for a detector model trained on your own data by simply providing the path to the weights file.
import asone
from asone import ASOne
model = ASOne(detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True)
vid = model.read_video('data/sample_videos/license_video.mp4')
for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True, class_names=['license_plate'])
6.1.2 👉 Changing Detector Models
Change the detector by simply changing the detector flag. The flags are provided in the benchmark tables.
- Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
model = ASOne(detector=asone.YOLOV5X_MLMODEL)
model = ASOne(detector=asone.YOLOV7_MLMODEL)
model = ASOne(detector=asone.YOLOV8L_MLMODEL)
6.2 👉 Object Tracking
Use the tracker on a sample video.
import asone
from asone import ASOne
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
[Note] You can use custom weights for a detector model by simply providing the path of the weights file to the ASOne class.
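For example, combining the weights argument from section 6.1.1 with a tracker (the weights path below is illustrative):
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True)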
6.2.1 👉 Changing Detector and Tracking Models
Change the tracker by simply changing the tracker flag. The flags are provided in the benchmark tables.
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV9_C, use_cuda=True)
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
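NorFair, mentioned in the introduction, follows the same pattern; assuming its flag is asone.NORFAIR as listed in the benchmark tables:
model = ASOne(tracker=asone.NORFAIR, detector=asone.YOLOV9_C, use_cuda=True)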
Run asone/demo_tracker.py to test the tracker.
# run on gpu
python -m asone.demo_tracker data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_tracker data/sample_videos/test.mp4 --cpu
6.3 👉 Segmentation
import asone
from asone import ASOne
model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True)
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw_masks(model_output, display=True)
6.4 👉 Text Detection
Sample code to detect text on an image
import asone
from asone import ASOne, utils
import cv2
model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)
img = cv2.imread('data/sample_imgs/sample_text.jpeg')
results = model.detect_text(img)
annotations = utils.draw_text(img, results, display=True)
Use Tracker on Text
import asone
from asone import ASOne
model = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/GTA_5-Unique_License_Plate.mp4')
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
Run asone/demo_ocr.py to test OCR.
# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4
# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
6.5 👉 Pose Estimation
Sample code to estimate pose on an image
import asone
from asone import PoseEstimator, utils
import cv2
model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True)
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)
annotations = utils.draw_kpts(kpts, image=img, display=True)
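To save the result to disk instead of displaying it, OpenCV's imwrite works on the returned image. A small sketch, assuming draw_kpts returns the annotated image when display is turned off:
annotated = utils.draw_kpts(kpts, image=img, display=False)  # assumption: returns the annotated image
cv2.imwrite('pose_output.jpg', annotated)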
- You can now use YOLOv8 and YOLOv7-W6 for pose estimation. The flags are provided in the benchmark tables.
import asone
from asone import PoseEstimator, utils
model = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True)
estimator = model.video_estimator('data/sample_videos/football1.mp4')
for model_output in estimator:
    annotations = utils.draw_kpts(model_output)
Run asone/demo_pose_estimator.py to test pose estimation.
# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4
# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu
To set up ASOne using Docker, follow the instructions given in the docker setup 🐳.
ToDo 📝
| Offered By 💼 | Maintained By 👨‍💻 |
|---|---|