
AniML comprises a variety of machine learning tools for analyzing ecological data. This Python package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos. This package is also available in R: animl
We recommend setting up a conda environment for animl; see Dependencies below for more detail. You will need to activate the conda environment each time you run animl from a new terminal.
git clone https://github.com/conservationtechlab/animl-py.git
cd animl-py
pip install -e .
Or install the latest release from PyPI:
pip install animl
We recommend running animl on GPU-enabled hardware. If using an NVIDIA GPU, ensure drivers, cuda-toolkit, and cudnn are installed. PyTorch will install these automatically if using a conda environment. The /models/ and /utils/ modules are from the YOLOv5 repository: https://github.com/ultralytics/yolov5
Python <= 3.9
PyTorch
Animl currently depends on torch <= 2.5.0. To enable GPU support, install the CUDA-enabled build of PyTorch.
Python Package Dependencies
We recommend downloading the examples folder from this repository. Download and unarchive the zip folder, then, with the conda environment active, run:
python -m animl /path/to/example/folder
This should create an Animl-Directory subfolder within the example folder.
Or, if using your own data and models, animl can be given the paths to those files directly:
python -m animl /example/folder --detector /path/to/megadetector --classifier /path/to/classifier --classlist /path/to/classlist.txt
You can use animl in this fashion on any image directory.
Finally, you can use the animl.yml config file to specify parameters:
python -m animl /path/to/animl.yml
The functionality of animl can be broken down into its individual functions to suit your data and scripting needs. The sandbox.ipynb notebook walks through each of these steps for further exploration.
from animl import file_management
workingdir = file_management.WorkingDirectory('/path/to/save/data')
files = file_management.build_file_manifest('/path/to/images', out_file=workingdir.filemanifest, exif=True)
from animl import video_processing
allframes = video_processing.extract_frames(files, out_dir=workingdir.vidfdir, out_file=workingdir.imageframes,
                                            parallel=True, frames=3, fps=None)
from animl import detect, megadetector
detector = megadetector.MegaDetector('/path/to/mdmodel.pt', device='cuda:0')
mdresults = detect.detect_MD_batch(detector, allframes, file_col="Frame", checkpoint_path=workingdir.mdraw, quiet=True)
detections = detect.parse_MD(mdresults, manifest=allframes, out_file=workingdir.detections)
from animl import split
animals = split.get_animals(detections)
empty = split.get_empty(detections)
from animl import classifiers, inference
classifier, class_list = classifiers.load_model('/path/to/model', '/path/to/classlist.txt', device='cuda:0')
animals = inference.predict_species(animals, classifier, class_list, file_col="Frame",
                                    batch_size=4, out_file=workingdir.predictions)
import pandas as pd

manifest = pd.concat([animals if not animals.empty else None,
                      empty if not empty.empty else None]).reset_index(drop=True)
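pd.concat silently drops list entries that are None, which is what makes the conditional expressions above safe when one of the frames is empty. A minimal, self-contained illustration with hypothetical data:

```python
import pandas as pd

# Hypothetical detection results: one animal crop, no empty frames.
animals = pd.DataFrame({"FilePath": ["a.jpg"], "prediction": ["cat"]})
empty = pd.DataFrame(columns=["FilePath", "prediction"])

# pd.concat ignores None entries, so the empty frame is simply dropped.
manifest = pd.concat([animals if not animals.empty else None,
                      empty if not empty.empty else None]).reset_index(drop=True)
print(len(manifest))  # 1
```

Note that pd.concat raises a ValueError if every entry is None, so at least one of the two frames must be non-empty.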
from animl import timelapse, animl_results_to_md_results
csv_loc = timelapse.csv_converter(animals, empty, imagedir, only_animl=True)
animl_results_to_md_results.animl_results_to_md_results(csv_loc, imagedir + "final_result.json")
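Note that building the output path with string concatenation, as above, only works if imagedir ends with a path separator; os.path.join sidesteps that pitfall (illustrative path):

```python
import os

imagedir = "/path/to/images"  # no trailing slash needed with os.path.join
out_json = os.path.join(imagedir, "final_result.json")
print(out_json)  # /path/to/images/final_result.json
```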
from animl import link

manifest = link.sort_species(manifest, workingdir.linkdir)
file_management.save_data(manifest, workingdir.results)
Training workflows are still under development. Please submit issues as you encounter them.
from animl import split
train, val, test, stats = split.train_val_test(manifest, out_dir='path/to/save/data/', label_col="species",
                                               percentage=(0.7, 0.2, 0.1), seed=None)
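The percentage tuple divides the manifest roughly 70/20/10. As a generic sketch of that arithmetic (plain pandas with hypothetical data, not animl's internal implementation):

```python
import pandas as pd

# Hypothetical manifest of 100 labeled rows.
manifest = pd.DataFrame({"FilePath": [f"img{i}.jpg" for i in range(100)],
                         "species": ["cat"] * 50 + ["dog"] * 50})

# Shuffle with a fixed seed, then cut at the 70% and 90% marks.
shuffled = manifest.sample(frac=1.0, random_state=28).reset_index(drop=True)
n = len(shuffled)
train = shuffled.iloc[:int(0.7 * n)]
val = shuffled.iloc[int(0.7 * n):int(0.9 * n)]
test = shuffled.iloc[int(0.9 * n):]
print(len(train), len(val), len(test))  # 70 20 10
```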
seed: 28 # random number generator seed (long integer value)
device: cuda:0 # set to local gpu device
num_workers: 8 # number of cores
# dataset parameters
num_classes: 53 # might need to be adjusted based on the classes file
training_set: "/path/to/save/train_data.csv"
validate_set: "/path/to/save/validate_data.csv"
test_set: "/path/to/save/test_data.csv"
class_file: "/home/usr/machinelearning/Models/Animl-Test/test_classes.txt"
# training hyperparameters
architecture: "efficientnet_v2_m" # or choose "convnext_base"
image_size: [299, 299]
batch_size: 16
num_epochs: 100
checkpoint_frequency: 10
patience: 10 # remove from config file to disable
learning_rate: 0.003
weight_decay: 0.001
# overwrite .pt files
overwrite: False
experiment_folder: '/home/usr/machinelearning/Models/Animl-Test/'
# model to test
active_model: '/home/usr/machinelearning/Models/Animl-Test/best.pt'
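The patience parameter enables early stopping: training halts once the validation loss has failed to improve for patience consecutive epochs. A sketch of the usual patience rule (illustrative, not animl's exact training loop):

```python
# Hypothetical validation-loss curve; stop after `patience` epochs
# with no improvement over the best loss seen so far.
patience = 10
val_losses = [0.9, 0.8, 0.8, 0.81] + [0.82] * 10

best, stale, stopped_at = float("inf"), 0, None
for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:
        best, stale = loss, 0
    else:
        stale += 1
        if stale >= patience:
            stopped_at = epoch
            break
print(stopped_at)  # 12: ten epochs without improvement after epoch 2
```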
class_file refers to a file that contains index,label pairs. For example:
test_class.txt
id,class,Species,Common
1,cat,Felis catus,domestic cat
2,dog,Canis familiaris,domestic dog
(Optional) Update train.py to include MLOPS connection.
Using the config file, begin training:
python -m animl.train --config /path/to/config.yaml
Every 10 epochs (or at a custom 'checkpoint_frequency'), the model is checkpointed to the 'experiment_folder' specified in the config file; each checkpoint includes performance metrics for model selection.
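That cadence amounts to saving whenever the epoch number is a multiple of checkpoint_frequency (a sketch of the arithmetic, not animl's exact code):

```python
# With checkpoint_frequency: 10 and num_epochs: 100 from the config,
# a checkpoint is written at epochs 10, 20, ..., 100.
checkpoint_frequency = 10
num_epochs = 100
saved = [epoch for epoch in range(1, num_epochs + 1)
         if epoch % checkpoint_frequency == 0]
print(len(saved), saved[0], saved[-1])  # 10 10 100
```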
To evaluate a trained model on the test set:
python -m animl.test --config /path/to/config.yaml
The Conservation Technology Lab has several models available for use.