
**UPDATE: AS-One v2 is now out! We've added YOLOv9 and SAM support.**
AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as ByteTrack, DeepSORT, or NorFair can be combined with different versions of YOLO in a minimal amount of code. The wrapper provides YOLO models in ONNX, PyTorch, and CoreML flavors, and we plan to support future versions of YOLO as they are released. This is one library for most of your computer vision needs.
If you would like to dive deeper into YOLO Object Detection and Tracking, check out our courses and projects.
Watch the step-by-step tutorial 🤝
Install GPU drivers on your system if you want to use a GPU, and follow the driver installation guide for further instructions. Then install the package:

```
pip install asone
```
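To confirm the install, here is a quick version check; a minimal sketch using only the standard library, assuming the package is installed under the name `asone`:

```python
# Sanity check: print the installed asone version (stdlib only)
from importlib.metadata import version
print(version("asone"))
```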
To install from source instead, navigate to an empty folder of your choice and clone the repository:

```
git clone https://github.com/augmentedstartups/AS-One.git
cd AS-One
```

On Linux, create and activate a virtual environment, then install the dependencies:

```
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt

# for CPU
pip install torch torchvision

# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```
On Windows 10/11, create and activate the virtual environment and install the dependencies as follows:

```
python -m venv .env
.env\Scripts\activate

pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox
pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3

# for CPU
pip install torch torchvision

# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
# or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
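After a GPU install on either platform, you can verify that PyTorch sees your CUDA device. This is a standard PyTorch check, not an AS-One API:

```python
# Verify that torch can use the GPU
import torch
print(torch.cuda.is_available())  # True if a usable GPU is visible
print(torch.version.cuda)         # CUDA version torch was built with, e.g. '11.3'
```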
On macOS:

```
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt

# for CPU
pip install torch torchvision
```
Use the tracker on a sample video:

```python
import asone
from asone import ASOne

model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=False)
```
Try it in Google Colab 💻

```python
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True)  # Set use_cuda to False for CPU
vid = model.read_video('data/sample_videos/test.mp4')
for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True)
```
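To persist the results instead of displaying them, here is a minimal sketch that writes annotated frames to a video file; it assumes `ASOne.draw` returns the annotated frame as a BGR NumPy array, and the output path and FPS are illustrative:

```python
import asone
from asone import ASOne
import cv2

model = ASOne(detector=asone.YOLOV9_C, use_cuda=True)
vid = model.read_video('data/sample_videos/test.mp4')

writer = None
for img in vid:
    detection = model.detecter(img)
    # Assumption: draw() returns the annotated frame when display=False
    annotated = ASOne.draw(detection, img=img, display=False)
    if writer is None:
        h, w = annotated.shape[:2]
        writer = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'MJPG'), 30, (w, h))
    writer.write(annotated)

if writer is not None:
    writer.release()
```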
Run `asone/demo_detector.py` to test the detector.
```
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
```
Use custom weights for a detector model trained on your own data by simply providing the path to the weights file:

```python
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, weights='data/custom_weights/yolov7_custom.pt', use_cuda=True)  # Set use_cuda to False for CPU
vid = model.read_video('data/sample_videos/license_video.mp4')
for img in vid:
    detection = model.detecter(img)
    annotations = ASOne.draw(detection, img=img, display=True, class_names=['license_plate'])
```
Change the detector by simply changing the detector flag. The flags are provided in the benchmark tables.

```python
# Change detector
model = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)

# For macOS, use the CoreML flavors:
# YOLOv5
model = ASOne(detector=asone.YOLOV5X_MLMODEL)
# YOLOv7
model = ASOne(detector=asone.YOLOV7_MLMODEL)
# YOLOv8
model = ASOne(detector=asone.YOLOV8L_MLMODEL)
```
Use the tracker on a sample video:

```python
import asone
from asone import ASOne

# Instantiate ASOne object
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)  # set use_cuda=False to use CPU
tracks = model.video_tracker('data/sample_videos/test.mp4', filter_classes=['car'])

# Loop over tracks to retrieve outputs for each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here
```
[Note] You can use custom weights for a detector model by simply providing the path to the weights file in the `ASOne` class, as sketched below.
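For example, combining the `weights` argument from the detection example above with a tracker; the weights path is illustrative:

```python
import asone
from asone import ASOne

# Custom detector weights combined with a tracker (weights path is illustrative)
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C,
              weights='data/custom_weights/yolov7_custom.pt', use_cuda=True)
tracks = model.video_tracker('data/sample_videos/test.mp4')
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
```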
Change the tracker by simply changing the tracker flag. The flags are provided in the benchmark tables.

```python
model = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV9_C, use_cuda=True)

# Change tracker
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV9_C, use_cuda=True)

# Change detector
model = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
```
Run `asone/demo_tracker.py` to test the tracker.
```
# run on gpu
python -m asone.demo_tracker data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_tracker data/sample_videos/test.mp4 --cpu
```
Use the segmentor (SAM) together with a detector on a sample video:

```python
import asone
from asone import ASOne

model = ASOne(detector=asone.YOLOV9_C, segmentor=asone.SAM, use_cuda=True)  # set use_cuda=False to use CPU
tracks = model.video_detecter('data/sample_videos/test.mp4', filter_classes=['car'])
for model_output in tracks:
    annotations = ASOne.draw_masks(model_output, display=True)  # Draw masks
```
Sample code to detect and recognize text in an image:

```python
# Detect and recognize text
import asone
from asone import ASOne, utils
import cv2

model = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)  # Set use_cuda to False for CPU
img = cv2.imread('data/sample_imgs/sample_text.jpeg')
results = model.detect_text(img)
annotations = utils.draw_text(img, results, display=True)
```
Use the tracker on text:

```python
import asone
from asone import ASOne

# Instantiate ASOne object
model = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)  # set use_cuda=False to use CPU
tracks = model.video_tracker('data/sample_videos/GTA_5-Unique_License_Plate.mp4')

# Loop over tracks to retrieve outputs for each frame
for model_output in tracks:
    annotations = ASOne.draw(model_output, display=True)
    # Do anything with bboxes here
```
Run `asone/demo_ocr.py` to test OCR.
```
# run on gpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4

# run on cpu
python -m asone.demo_ocr data/sample_videos/GTA_5-Unique_License_Plate.mp4 --cpu
```
Sample code to estimate pose on an image:

```python
# Pose Estimation
import asone
from asone import PoseEstimator, utils
import cv2

model = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True)  # set use_cuda=False to use CPU
img = cv2.imread('data/sample_imgs/test2.jpg')
kpts = model.estimate_image(img)
annotations = utils.draw_kpts(kpts, image=img, display=True)
```
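To save the result rather than display it, assuming `draw_kpts` returns the annotated image as a NumPy array (as the assignment above suggests), OpenCV's `imwrite` can write it out; the output path is illustrative:

```python
# Assumption: `annotations` is the annotated image returned by draw_kpts
cv2.imwrite('pose_annotated.jpg', annotations)
```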
```python
# Pose Estimation on video
import asone
from asone import PoseEstimator, utils

model = PoseEstimator(estimator_flag=asone.YOLOV7_W6_POSE, use_cuda=True)  # set use_cuda=False to use CPU
estimator = model.video_estimator('data/sample_videos/football1.mp4')
for model_output in estimator:
    annotations = utils.draw_kpts(model_output)
    # Do anything with kpts here
```
Run `asone/demo_pose_estimator.py` to test pose estimation.

```
# run on gpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4

# run on cpu
python -m asone.demo_pose_estimator data/sample_videos/football1.mp4 --cpu
```
To set up AS-One using Docker, follow the instructions given in the Docker setup guide 🐳.