
## What's New

- `vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k_ft_in1k` - 86.7% top-1
- `vit_so150m_patch16_reg4_gap_384.sbb_e250_in12k_ft_in1k` - 87.4% top-1
- `vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k`
- Add support to train and validate in pure `bfloat16` or `float16`
- `wandb` project name arg added by https://github.com/caojiaolong, use arg.experiment for name
- Add `torch.utils.checkpoint.checkpoint()` wrapper in `timm.models` that defaults `use_reentrant=False`, unless `TIMM_REENTRANT_CKPT=1` is set in env (see the sketch below)
- `convnext_nano` 384x384 ImageNet-12k pretrain & fine-tune. https://huggingface.co/models?search=convnext_nano%20r384
- Add `vit_large_patch14_clip_224.dfn2b_s39b` CLIP ViT weights
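
Gradient checkpointing in `timm` models routes through that wrapper; a minimal sketch of enabling it via the usual `set_grad_checkpointing` model method:

```python
import timm

# checkpointed blocks use timm's wrapper, which defaults to use_reentrant=False
# unless TIMM_REENTRANT_CKPT=1 is set in the environment
model = timm.create_model('vit_base_patch16_224')
model.set_grad_checkpointing(True)
```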
- Fix `RmsNorm` layer & fn to match standard formulation, use PT 2.5 impl when possible. Move old impl to `SimpleNorm` layer, it's LN w/o centering or bias. There were only two `timm` models using it, and they have been updated.
- Add `cache_dir` arg for model creation
- Pass through `trust_remote_code` for HF datasets wrapper
- `inception_next_atto` model added by creator
- Model weights that used load-time remapping were given their own HF Hub instances so they work with `hf-hub:` based loading, and thus will work with the new Transformers `TimmWrapperModel` (see the loading sketch below)
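
A minimal loading sketch (the Hub repo name is an example):

```python
import timm

# load pretrained weights directly from the Hugging Face Hub via the hf-hub: prefix
model = timm.create_model('hf-hub:timm/vit_base_patch16_224.augreg2_in21k_ft_in1k', pretrained=True)
```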
- Add `list_optimizers`, `get_optimizer_class`, `get_optimizer_info` to reworked `create_optimizer_v2` fn to explore optimizers, get info or class (see the sketch below)
- Deprecate `optim.optim_factory`, move fns to `optim/_optim_factory.py` and `optim/_param_groups.py` and encourage import via `timm.optim`
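
A minimal sketch of the new exploration helpers (the optimizer name is just an example):

```python
import timm.optim

# list registered optimizer names
print(timm.optim.list_optimizers()[:5])

# inspect the metadata and underlying class for a single optimizer
print(timm.optim.get_optimizer_info('adamw'))
print(timm.optim.get_optimizer_class('adamw'))
```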
- Add a set of new very well trained ResNet & ResNet-V2 18/34 (basic block) weights. See https://huggingface.co/blog/rwightman/resnet-trick-or-treat
- Fix import from deprecated path `timm.models.registry`, increased priority of existing deprecation warnings to be visible
- Port weights of InternViT-300M to `timm` as `vit_intern300m_patch14_448`
model | top1 | top5 | param_count | img_size |
---|---|---|---|---|
vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k | 87.438 | 98.256 | 64.11 | 384 |
vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k | 86.608 | 97.934 | 64.11 | 256 |
vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k | 86.594 | 98.02 | 60.4 | 384 |
vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k | 85.734 | 97.61 | 60.4 | 256 |
model | top1 | top5 | param_count | img_size |
---|---|---|---|---|
resnet50d.ra4_e3600_r224_in1k | 81.838 | 95.922 | 25.58 | 288 |
efficientnet_b1.ra4_e3600_r240_in1k | 81.440 | 95.700 | 7.79 | 288 |
resnet50d.ra4_e3600_r224_in1k | 80.952 | 95.384 | 25.58 | 224 |
efficientnet_b1.ra4_e3600_r240_in1k | 80.406 | 95.152 | 7.79 | 240 |
mobilenetv1_125.ra4_e3600_r224_in1k | 77.600 | 93.804 | 6.27 | 256 |
mobilenetv1_125.ra4_e3600_r224_in1k | 76.924 | 93.234 | 6.27 | 224 |
model | top1 | top5 | param_count |
---|---|---|---|
hiera_small_abswin_256.sbb2_e200_in12k_ft_in1k | 84.912 | 97.260 | 35.01 |
hiera_small_abswin_256.sbb2_pd_e200_in12k_ft_in1k | 84.560 | 97.106 | 35.01 |
- `mobilenet_edgetpu_v2_m` weights w/ `ra4` mnv4-small based recipe. 80.1% top-1 @ 224 and 80.7 @ 256.

model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.99 | 15.01 | 97.294 | 2.706 | 32.59 | 544 |
mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.772 | 15.228 | 97.344 | 2.656 | 32.59 | 480 |
mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.64 | 15.36 | 97.114 | 2.886 | 32.59 | 448 |
mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.314 | 15.686 | 97.102 | 2.898 | 32.59 | 384 |
mobilenetv4_conv_aa_large.e600_r384_in1k | 83.824 | 16.176 | 96.734 | 3.266 | 32.59 | 480 |
mobilenetv4_conv_aa_large.e600_r384_in1k | 83.244 | 16.756 | 96.392 | 3.608 | 32.59 | 384 |
mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.99 | 17.01 | 96.67 | 3.33 | 11.07 | 320 |
mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.364 | 17.636 | 96.256 | 3.744 | 11.07 | 256 |
model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
efficientnet_b0.ra4_e3600_r224_in1k | 79.364 | 20.636 | 94.754 | 5.246 | 5.29 | 256 |
efficientnet_b0.ra4_e3600_r224_in1k | 78.584 | 21.416 | 94.338 | 5.662 | 5.29 | 224 |
mobilenetv1_100h.ra4_e3600_r224_in1k | 76.596 | 23.404 | 93.272 | 6.728 | 5.28 | 256 |
mobilenetv1_100.ra4_e3600_r224_in1k | 76.094 | 23.906 | 93.004 | 6.996 | 4.23 | 256 |
mobilenetv1_100h.ra4_e3600_r224_in1k | 75.662 | 24.338 | 92.504 | 7.496 | 5.28 | 224 |
mobilenetv1_100.ra4_e3600_r224_in1k | 75.382 | 24.618 | 92.312 | 7.688 | 4.23 | 224 |
- `set_input_size()` added to vit and swin v1/v2 models to allow changing image size, patch size, window size after model creation (see the sketch after the table below)
- `set_input_size`, `always_partition` and `strict_img_size` args have been added to swin `__init__` to allow more flexible input size constraints
- Add tiny (< .5M param) models for testing that are actually trained on ImageNet-1k

model | top1 | top1_err | top5 | top5_err | param_count | img_size | crop_pct |
---|---|---|---|---|---|---|---|
test_efficientnet.r160_in1k | 47.156 | 52.844 | 71.726 | 28.274 | 0.36 | 192 | 1.0 |
test_byobnet.r160_in1k | 46.698 | 53.302 | 71.674 | 28.326 | 0.46 | 192 | 1.0 |
test_efficientnet.r160_in1k | 46.426 | 53.574 | 70.928 | 29.072 | 0.36 | 160 | 0.875 |
test_byobnet.r160_in1k | 45.378 | 54.622 | 70.572 | 29.428 | 0.46 | 160 | 0.875 |
test_vit.r160_in1k | 42.0 | 58.0 | 68.664 | 31.336 | 0.37 | 192 | 1.0 |
test_vit.r160_in1k | 40.822 | 59.178 | 67.212 | 32.788 | 0.37 | 160 | 0.875 |
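
A minimal sketch of post-creation resizing (the model name and target size are illustrative):

```python
import timm

# create a ViT at its default resolution, then change the input size afterwards
model = timm.create_model('vit_base_patch16_224')
model.set_input_size(img_size=(384, 384))
```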
model | top1 | top1_err | top5 | top5_err | param_count | img_size |
---|---|---|---|---|---|---|
mobilenetv4_hybrid_large.ix_e600_r384_in1k | 84.356 | 15.644 | 96.892 | 3.108 | 37.76 | 448 |
mobilenetv4_hybrid_large.ix_e600_r384_in1k | 83.990 | 16.010 | 96.702 | 3.298 | 37.76 | 384 |
mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 83.394 | 16.606 | 96.760 | 3.240 | 11.07 | 448 |
mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 82.968 | 17.032 | 96.474 | 3.526 | 11.07 | 384 |
mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 82.492 | 17.508 | 96.278 | 3.722 | 11.07 | 320 |
mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 81.446 | 18.554 | 95.704 | 4.296 | 11.07 | 256 |
- MobileNetV4 models and initial set of `timm` trained weights added
- Add `normalize=` flag for transforms, return non-normalized `torch.Tensor` with original dtype (for `chug`)
- Searching for Better ViT Baselines (For the GPU Poor) weights and vit variants released. Exploring model shapes between Tiny and Base.

model | top1 | top5 | param_count | img_size |
---|---|---|---|---|
vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k | 86.202 | 97.874 | 64.11 | 256 |
vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k | 85.418 | 97.48 | 60.4 | 256 |
vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k | 84.322 | 96.812 | 63.95 | 256 |
vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k | 83.906 | 96.684 | 60.23 | 256 |
vit_base_patch16_rope_reg1_gap_256.sbb_in1k | 83.866 | 96.67 | 86.43 | 256 |
vit_medium_patch16_rope_reg1_gap_256.sbb_in1k | 83.81 | 96.824 | 38.74 | 256 |
vit_betwixt_patch16_reg4_gap_256.sbb_in1k | 83.706 | 96.616 | 60.4 | 256 |
vit_betwixt_patch16_reg1_gap_256.sbb_in1k | 83.628 | 96.544 | 60.4 | 256 |
vit_medium_patch16_reg4_gap_256.sbb_in1k | 83.47 | 96.622 | 38.88 | 256 |
vit_medium_patch16_reg1_gap_256.sbb_in1k | 83.462 | 96.548 | 38.88 | 256 |
vit_little_patch16_reg4_gap_256.sbb_in1k | 82.514 | 96.262 | 22.52 | 256 |
vit_wee_patch16_reg1_gap_256.sbb_in1k | 80.256 | 95.360 | 13.42 | 256 |
vit_pwee_patch16_reg1_gap_256.sbb_in1k | 80.072 | 95.136 | 15.25 | 256 |
vit_mediumd_patch16_reg4_gap_256.sbb_in12k | N/A | N/A | 64.11 | 256 |
vit_betwixt_patch16_reg4_gap_256.sbb_in12k | N/A | N/A | 60.4 | 256 |
- AttentionExtract helper added to extract attention maps from `timm` models. See example in https://github.com/huggingface/pytorch-image-models/discussions/1232#discussioncomment-9320949
- `forward_intermediates()` API refined and added to more models including some ConvNets that have other extraction methods.
- Most model architectures now support `features_only=True` feature extraction. The remaining 34 architectures can be supported based on priority requests.
- `features_only=True` support for ViT models with flat hidden states or non-std module layouts (so far covering `'vit_*', 'twins_*', 'deit*', 'beit*', 'mvitv2*', 'eva*', 'samvit_*', 'flexivit*'`)
- The above feature support is achieved through a new `forward_intermediates()` API that can be used with a feature wrapping module or directly:

```python
import torch
import timm

model = timm.create_model('vit_base_patch16_224')
input = torch.randn(2, 3, 224, 224)  # example input batch

final_feat, intermediates = model.forward_intermediates(input)

output = model.forward_head(final_feat)  # pooling + classifier head

print(final_feat.shape)  # torch.Size([2, 197, 768])

for f in intermediates:
    print(f.shape)  # torch.Size([2, 768, 14, 14]) for each of the 12 blocks

print(output.shape)  # torch.Size([2, 1000])
```

The same API drives `features_only=True` feature extraction:

```python
model = timm.create_model('eva02_base_patch16_clip_224', pretrained=True, img_size=512, features_only=True, out_indices=(-3, -2,))
output = model(torch.randn(2, 3, 512, 512))

for o in output:
    print(o.shape)  # torch.Size([2, 768, 32, 32]) for each of the two selected indices
```
## Introduction

PyTorch Image Models (`timm`) is a collection of image models, layers, utilities, optimizers, schedulers, data-loaders / augmentations, and reference training / validation scripts that aim to pull together a wide variety of SOTA models with the ability to reproduce ImageNet training results.
The work of many others is present here. I've tried to make sure all source material is acknowledged via links to github, arxiv papers, etc in the README, documentation, and code docstrings. Please let me know if I missed anything.
All model architecture families include variants with pretrained weights. There are specific model variants without any weights; that is not a bug. Help training new or better weights is always appreciated.
## Optimizers

To see a full list of optimizers w/ descriptions: `timm.optim.list_optimizers(with_description=True)`

Included optimizers available via the `timm.optim.create_optimizer_v2` factory method (a usage sketch follows the list):
- `adabelief` an implementation of AdaBelief adapted from https://github.com/juntang-zhuang/Adabelief-Optimizer - https://arxiv.org/abs/2010.07468
- `adafactor` adapted from FAIRSeq impl - https://arxiv.org/abs/1804.04235
- `adafactorbv` adapted from Big Vision - https://arxiv.org/abs/2106.04560
- `adahessian` by David Samuel - https://arxiv.org/abs/2006.00719
- `adamp` and `sgdp` by Naver ClovAI - https://arxiv.org/abs/2006.08217
- `adan` an implementation of Adan adapted from https://github.com/sail-sg/Adan - https://arxiv.org/abs/2208.06677
- `adopt` ADOPT adapted from https://github.com/iShohei220/adopt - https://arxiv.org/abs/2411.02853
- `lamb` an implementation of Lamb and LambC (w/ trust-clipping) cleaned up and modified to support use with XLA - https://arxiv.org/abs/1904.00962
- `laprop` optimizer from https://github.com/Z-T-WANG/LaProp-Optimizer - https://arxiv.org/abs/2002.04839
- `lars` an implementation of LARS and LARC (w/ trust-clipping) - https://arxiv.org/abs/1708.03888
- `lion` an implementation of Lion adapted from https://github.com/google/automl/tree/master/lion - https://arxiv.org/abs/2302.06675
- `lookahead` adapted from impl by Liam - https://arxiv.org/abs/1907.08610
- `madgrad` an implementation of MADGRAD adapted from https://github.com/facebookresearch/madgrad - https://arxiv.org/abs/2101.11075
- `mars` MARS optimizer from https://github.com/AGI-Arena/MARS - https://arxiv.org/abs/2411.10438
- `nadam` an implementation of Adam w/ Nesterov momentum
- `nadamw` an implementation of AdamW (Adam w/ decoupled weight-decay) w/ Nesterov momentum. A simplified impl based on https://github.com/mlcommons/algorithmic-efficiency
- `novograd` by Masashi Kimura - https://arxiv.org/abs/1905.11286
- `radam` by Liyuan Liu - https://arxiv.org/abs/1908.03265
- `rmsprop_tf` adapted from PyTorch RMSProp by myself. Reproduces much improved Tensorflow RMSProp behaviour
- `sgdw` an implementation of SGD w/ decoupled weight-decay
- `fused<name>` optimizers by name with NVIDIA Apex installed
- `bnb<name>` optimizers by name with BitsAndBytes installed
- `cadamw`, `clion`, and more 'Cautious' optimizers from https://github.com/kyleliang919/C-Optim - https://arxiv.org/abs/2411.16085
- `adam`, `adamw`, `rmsprop`, `adadelta`, `adagrad`, and `sgd` pass through to `torch.optim` implementations
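
A minimal sketch of building one of the optimizers above via the factory (model and hyper-parameters are illustrative):

```python
import timm
import timm.optim

model = timm.create_model('resnet18')

# any optimizer name from the list above can be passed as `opt`
optimizer = timm.optim.create_optimizer_v2(model, opt='adamw', lr=1e-3, weight_decay=0.05)
```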
## Features

Several (less common) features that I often utilize in my projects are included. Many of their additions are the reason why I maintain my own set of models, instead of using others' via PIP:
- Accessing/changing the classifier: `get_classifier` and `reset_classifier`
- Doing a forward pass on just the features: `forward_features` (see documentation)
- Multi-scale feature map extraction: `create_model(name, features_only=True, out_indices=..., output_stride=...)` (see the sketch after this list)
  - `out_indices` creation arg specifies which feature maps to return, these indices are 0 based and generally correspond to the `C(i + 1)` feature level.
  - `output_stride` creation arg controls output stride of the network by using dilated convolutions. Most networks are stride 32 by default. Not all networks support this.
  - Feature map channel counts and reduction levels (strides) can be queried after model creation via the `.feature_info` member
- Learning rate schedulers: `step`, `cosine` w/ restarts, `tanh` w/ restarts, `plateau`
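
A minimal sketch of these interfaces (the model name and argument values are illustrative):

```python
import torch
import timm

# common classifier / feature interface
model = timm.create_model('resnet50')
print(model.get_classifier())           # current classification head
model.reset_classifier(num_classes=10)  # swap in a 10-class head
feats = model.forward_features(torch.randn(1, 3, 224, 224))  # unpooled features
print(feats.shape)

# multi-scale feature map (feature pyramid) extraction
fx = timm.create_model('resnet50', features_only=True, out_indices=(1, 2, 3), output_stride=16)
print(fx.feature_info.channels())   # channel count per selected feature level
print(fx.feature_info.reduction())  # reduction (stride) per selected feature level
for f in fx(torch.randn(1, 3, 224, 224)):
    print(f.shape)
```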
## Results

Model validation results can be found in the results tables.
## Getting Started (Documentation)

The official documentation can be found at https://huggingface.co/docs/hub/timm. Documentation contributions are welcome.

Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide by Chris Hughes is an extensive blog post covering many aspects of `timm` in detail.

timmdocs is an alternate set of documentation for `timm`. A big thanks to Aman Arora for his efforts creating timmdocs.

paperswithcode is a good resource for browsing the models within `timm`.
## Train, Validation, Inference Scripts

The root folder of the repository contains reference train, validation, and inference scripts that work with the included models and other features of this repository. They are adaptable for other datasets and use cases with a little hacking. See documentation.
## Awesome PyTorch Resources

One of the greatest assets of PyTorch is the community and their contributions. A few of my favourite resources that pair well with the models and components here are listed below.
## Licenses

The code here is licensed Apache 2.0. I've taken care to make sure any third party code included or adapted has compatible (permissive) licenses such as MIT, BSD, etc. I've made an effort to avoid any GPL / LGPL conflicts. That said, it is your responsibility to ensure you comply with licenses here and conditions of any dependent licenses. Where applicable, I've linked the sources/references for various components in docstrings. If you think I've missed anything please create an issue.
So far all of the pretrained weights available here are pretrained on ImageNet with a select few that have some additional pretraining (see extra note below). ImageNet was released for non-commercial research purposes only (https://image-net.org/download). It's not clear what the implications of that are for the use of pretrained weights from that dataset. Any models I have trained with ImageNet are done for research purposes and one should assume that the original dataset license applies to the weights. It's best to seek legal advice if you intend to use the pretrained weights in a commercial product.
Several weights included or referenced here were pretrained with proprietary datasets that I do not have access to. These include the Facebook WSL, SSL, SWSL ResNe(Xt) and the Google Noisy Student EfficientNet models. The Facebook models have an explicit non-commercial license (CC-BY-NC 4.0, https://github.com/facebookresearch/semi-supervised-ImageNet1K-models, https://github.com/facebookresearch/WSL-Images). The Google models do not appear to have any restriction beyond the Apache 2.0 license (and ImageNet concerns). In either case, you should contact Facebook or Google with any questions.
## Citing

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```