Easiest way of fine-tuning HuggingFace video classification models.

video-transformers uses:

- 🤗 accelerate for distributed training
- 🤗 evaluate for evaluation
- pytorchvideo for data loading

and supports:

- creating and fine-tuning video models using transformers and timm vision models
- experiment tracking with Neptune, Tensorboard and other trackers
- exporting fine-tuned models in ONNX format
- pushing fine-tuned models to the HuggingFace Hub
- loading pretrained models from the HuggingFace Hub
- automated Gradio app and Space creation
Install PyTorch and the remaining dependencies:

```bash
conda install pytorch=1.11.0 torchvision=0.12.0 cudatoolkit=11.3 -c pytorch
pip install git+https://github.com/facebookresearch/pytorchvideo.git
pip install git+https://github.com/huggingface/transformers.git
```

Then install video-transformers:

```bash
pip install video-transformers
```
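As a quick, optional sanity check (not from the project docs), confirm the installs resolve:

```python
# Optional sanity check: all core dependencies should import without errors.
import torch
import pytorchvideo
import transformers
import video_transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
```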
Prepare your dataset as one folder per class, with the videos for each class inside it:

```
train_root
    label_1
        video_1
        video_2
        ...
    label_2
        video_1
        video_2
        ...
    ...
val_root
    label_1
        video_1
        video_2
        ...
    label_2
        video_1
        video_2
        ...
    ...
```
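As a quick check that a dataset follows this layout, here is a minimal sketch (not part of the library; the `ucf6` paths match the training examples below):

```python
from pathlib import Path

def summarize_split(root: str) -> None:
    """Print the class names and video counts found under one split root."""
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            n_videos = sum(1 for f in class_dir.iterdir() if f.is_file())
            print(f"{class_dir.name}: {n_videos} videos")

summarize_split("ucf6/train")
summarize_split("ucf6/val")
```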
Fine-tune a TimeSformer video backbone from the HuggingFace Hub on the UCF-6 dataset:

```python
from torch.optim import AdamW
from video_transformers import VideoModel
from video_transformers.backbones.transformers import TransformersBackbone
from video_transformers.data import VideoDataModule
from video_transformers.heads import LinearHead
from video_transformers.trainer import trainer_factory
from video_transformers.utils.file import download_ucf6

# Video backbone from the HuggingFace Hub; only the last stage is unfrozen for fine-tuning.
backbone = TransformersBackbone("facebook/timesformer-base-finetuned-k400", num_unfrozen_stages=1)

download_ucf6("./")

datamodule = VideoDataModule(
    train_root="ucf6/train",
    val_root="ucf6/val",
    batch_size=4,
    num_workers=4,
    num_timesteps=8,
    preprocess_input_size=224,
    preprocess_clip_duration=1,
    preprocess_means=backbone.mean,
    preprocess_stds=backbone.std,
    preprocess_min_short_side=256,
    preprocess_max_short_side=320,
    preprocess_horizontal_flip_p=0.5,
)

# Linear classification head sized to the number of classes found in the dataset.
head = LinearHead(hidden_size=backbone.num_features, num_classes=datamodule.num_classes)
model = VideoModel(backbone, head)

optimizer = AdamW(model.parameters(), lr=1e-4)

Trainer = trainer_factory("single_label_classification")
trainer = Trainer(datamodule, model, optimizer=optimizer, max_epochs=8)

trainer.fit()
```
Alternatively, wrap an image backbone (ConvNeXT) in TimeDistributed so it runs per frame, and aggregate the frame features over time with a Transformer neck:

```python
from torch.optim import AdamW
from video_transformers import TimeDistributed, VideoModel
from video_transformers.backbones.transformers import TransformersBackbone
from video_transformers.data import VideoDataModule
from video_transformers.heads import LinearHead
from video_transformers.necks import TransformerNeck
from video_transformers.trainer import trainer_factory
from video_transformers.utils.file import download_ucf6

# Image backbone applied independently to each frame via TimeDistributed.
backbone = TimeDistributed(TransformersBackbone("facebook/convnext-small-224", num_unfrozen_stages=1))

# Transformer encoder neck that models the temporal relation between frame features.
neck = TransformerNeck(
    num_features=backbone.num_features,
    num_timesteps=8,
    transformer_enc_num_heads=4,
    transformer_enc_num_layers=2,
    dropout_p=0.1,
)

download_ucf6("./")

datamodule = VideoDataModule(
    train_root="ucf6/train",
    val_root="ucf6/val",
    batch_size=4,
    num_workers=4,
    num_timesteps=8,
    preprocess_input_size=224,
    preprocess_clip_duration=1,
    preprocess_means=backbone.mean,
    preprocess_stds=backbone.std,
    preprocess_min_short_side=256,
    preprocess_max_short_side=320,
    preprocess_horizontal_flip_p=0.5,
)

head = LinearHead(hidden_size=neck.num_features, num_classes=datamodule.num_classes)
model = VideoModel(backbone, head, neck)

optimizer = AdamW(model.parameters(), lr=1e-4)

Trainer = trainer_factory("single_label_classification")
trainer = Trainer(
    datamodule,
    model,
    optimizer=optimizer,
    max_epochs=8
)

trainer.fit()
```
Or use a ResNet-18 image backbone with a GRU neck that returns the last hidden state:

```python
from video_transformers import TimeDistributed, VideoModel
from video_transformers.backbones.transformers import TransformersBackbone
from video_transformers.data import VideoDataModule
from video_transformers.heads import LinearHead
from video_transformers.necks import GRUNeck
from video_transformers.trainer import trainer_factory
from video_transformers.utils.file import download_ucf6

# Image backbone applied per frame, followed by a GRU over the frame features.
backbone = TimeDistributed(TransformersBackbone("microsoft/resnet-18", num_unfrozen_stages=1))
neck = GRUNeck(num_features=backbone.num_features, hidden_size=128, num_layers=2, return_last=True)

download_ucf6("./")

datamodule = VideoDataModule(
    train_root="ucf6/train",
    val_root="ucf6/val",
    batch_size=4,
    num_workers=4,
    num_timesteps=8,
    preprocess_input_size=224,
    preprocess_clip_duration=1,
    preprocess_means=backbone.mean,
    preprocess_stds=backbone.std,
    preprocess_min_short_side=256,
    preprocess_max_short_side=320,
    preprocess_horizontal_flip_p=0.5,
)

head = LinearHead(hidden_size=neck.hidden_size, num_classes=datamodule.num_classes)
model = VideoModel(backbone, head, neck)

# Unlike the examples above, no explicit optimizer is passed here.
Trainer = trainer_factory("single_label_classification")
trainer = Trainer(
    datamodule,
    model,
    max_epochs=8
)

trainer.fit()
```
Predict with a fine-tuned model (`model_name_or_path` can be a local checkpoint directory or a Hub model id, as in the examples below):

```python
from video_transformers import VideoModel

model = VideoModel.from_pretrained(model_name_or_path)
model.predict(video_or_folder_path="video.mp4")
# >> [{'filename': "video.mp4", 'predictions': {'class1': 0.98, 'class2': 0.02}}]
```
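Based on the output format shown above, a small post-processing sketch (not part of the library) that picks the top class per video:

```python
# Select the highest-scoring class from each prediction dict.
results = model.predict(video_or_folder_path="video.mp4")
for result in results:
    scores = result["predictions"]
    top_class = max(scores, key=scores.get)
    print(f"{result['filename']}: {top_class} ({scores[top_class]:.2f})")
```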
Push a fine-tuned model to the HuggingFace Hub:

```python
from video_transformers import VideoModel

model = VideoModel.from_pretrained("runs/exp/checkpoint")
model.push_to_hub('model_name')
```
Load any pretrained video-transformers model from the HuggingFace Hub:

```python
from video_transformers import VideoModel

model = VideoModel.from_pretrained('account_name/model_name')
```
Push your model as a Gradio app to a HuggingFace Space:

```python
from video_transformers import VideoModel

model = VideoModel.from_pretrained("runs/exp/checkpoint")
model.push_to_space('account_name/app_name')
```
The Tensorboard tracker is enabled by default. To add Neptune, Weights & Biases, or other trackers:

```python
from video_transformers.tracking import NeptuneTracker
from accelerate.tracking import WandBTracker

trackers = [
    NeptuneTracker(EXPERIMENT_NAME, api_token=NEPTUNE_API_TOKEN, project=NEPTUNE_PROJECT),
    WandBTracker(project_name=WANDB_PROJECT),
]

trainer = Trainer(
    datamodule,
    model,
    trackers=trackers
)
```
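To view the default Tensorboard logs locally (assuming the run logs are written under `runs/`, as the checkpoint paths in this README suggest):

```bash
tensorboard --logdir runs/
```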
Convert a trained model to ONNX:

```python
from video_transformers import VideoModel

model = VideoModel.from_pretrained("runs/exp/checkpoint")
model.to_onnx(quantize=False, opset_version=12, export_dir="runs/exports/", export_filename="model.onnx")
```
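A minimal inference sketch with onnxruntime (not from the project docs). The input layout depends on the exported model, so the shape is read from the session and any symbolic dimensions are filled with placeholder sizes that may need adjusting (e.g. to the number of frames used during training):

```python
import numpy as np
import onnxruntime as ort

# Load the exported model and inspect its first input.
session = ort.InferenceSession("runs/exports/model.onnx")
inp = session.get_inputs()[0]

# Dynamic/symbolic dimensions are replaced by 1 as a placeholder; adjust as needed.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: dummy})
print("input:", inp.name, shape, "-> output shape:", outputs[0].shape)
```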
Convert a trained model to a Gradio app:

```python
from video_transformers import VideoModel

model = VideoModel.from_pretrained("runs/exp/checkpoint")
model.to_gradio(examples=['video.mp4'], export_dir="runs/exports/", export_filename="app.py")
```
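Assuming Gradio is installed and the generated script launches the interface itself, the exported app can then be started locally:

```bash
python runs/exports/app.py
```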
Before opening a PR, install the dev extras and run the code style formatter:

```bash
pip install -e ".[dev]"
python -m tests.run_code_style format
```