LightlySSL is a computer vision framework for self-supervised learning.
For a commercial version with more features, including Docker support and pretraining
models for embedding, classification, detection, and segmentation tasks with
a single command, please contact sales@lightly.ai.
We've also built a whole platform on top, with additional features for active learning
and data curation. If you're interested in the
Lightly Worker Solution to easily process millions of samples and run powerful algorithms
on your data, check out lightly.ai. It's free to get started!
Features
This self-supervised learning framework offers the following features:
- Modular framework that exposes low-level building blocks such as loss functions and model heads (see the sketch after this list).
- Easy to use and written in a PyTorch-like style.
- Supports custom backbone models for self-supervised pre-training.
- Supports distributed training using PyTorch Lightning.
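As a quick taste of that modularity, here is a minimal sketch that uses a projection head and a loss function on their own. Random tensors stand in for backbone features; this is not a full training setup:

    import torch

    from lightly import loss
    from lightly.models.modules import heads

    # Project 512-dimensional backbone features down to 128 dimensions.
    projection_head = heads.SimCLRProjectionHead(
        input_dim=512,
        hidden_dim=512,
        output_dim=128,
    )

    # Random features standing in for two augmented views of the same batch.
    features0 = torch.randn(8, 512)
    features1 = torch.randn(8, 512)
    z0 = projection_head(features0)
    z1 = projection_head(features1)

    # The contrastive loss is just another standalone building block.
    criterion = loss.NTXentLoss(temperature=0.5)
    contrastive_loss = criterion(z0, z1)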
Supported Models
You can find sample code for all the supported models here. We provide PyTorch, PyTorch Lightning,
and PyTorch Lightning distributed examples for all models to kickstart your project.
Models:
Tutorials
Want to jump to the tutorials and see Lightly in action?
Community and partner projects:
Quick Start
Lightly requires Python 3.7+. We recommend installing Lightly in a Linux or macOS environment. Python 3.13 is not yet supported, as PyTorch itself lacks Python 3.13 compatibility.
Dependencies
Due to the modular nature of the Lightly package, some modules can be used with older versions of dependencies. However, to use all features as of today, Lightly requires the following dependencies:
Lightly is compatible with PyTorch and PyTorch Lightning v2.0+!
Installation
You can install Lightly and its dependencies from PyPI with:
pip3 install lightly
We strongly recommend installing Lightly in a dedicated virtualenv to avoid conflicts with your system packages.
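For example, a minimal setup with Python's built-in venv module (the environment name venv is arbitrary):

    python3 -m venv venv
    source venv/bin/activate
    pip3 install lightly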
Lightly in Action
With Lightly, you can use the latest self-supervised learning methods in a modular
way using the full power of PyTorch. Experiment with various backbones,
models, and loss functions. The framework has been designed to be easy to use
from the ground up. Find more examples in our docs.
import torch
import torchvision

from lightly import loss
from lightly import transforms
from lightly.data import LightlyDataset
from lightly.models.modules import heads


# Create a PyTorch module for the SimCLR model.
class SimCLR(torch.nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.projection_head = heads.SimCLRProjectionHead(
            input_dim=512,
            hidden_dim=512,
            output_dim=128,
        )

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z


# Use a ResNet backbone from torchvision and ignore the classification head,
# as we only want the features.
backbone = torchvision.models.resnet18()
backbone.fc = torch.nn.Identity()

# Build the SimCLR model.
model = SimCLR(backbone)

# Prepare a transform that creates multiple random views for every image.
transform = transforms.SimCLRTransform(input_size=32, cj_prob=0.5)

# Create a dataset from your image folder.
dataset = LightlyDataset(input_dir="./my/cute/cats/dataset/", transform=transform)

# Build a PyTorch dataloader.
dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=128,
    shuffle=True,
)

# Lightly exposes building blocks such as loss functions.
criterion = loss.NTXentLoss(temperature=0.5)

# Get a PyTorch optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-6)

# Train the model.
for epoch in range(10):
    for (view0, view1), targets, filenames in dataloader:
        z0 = model(view0)
        z1 = model(view1)
        loss = criterion(z0, z1)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        print(f"loss: {loss.item():.5f}")
You can easily use another model like SimSiam by swapping the model and the
loss function.
class SimSiam(torch.nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        self.projection_head = heads.SimSiamProjectionHead(512, 512, 128)
        self.prediction_head = heads.SimSiamPredictionHead(128, 64, 128)

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        p = self.prediction_head(z)
        # Stop the gradient through the projections, as required by SimSiam.
        z = z.detach()
        return z, p


model = SimSiam(backbone)

# SimSiam uses a negative cosine similarity loss instead of NTXentLoss.
criterion = loss.NegativeCosineSimilarity()
You can find a more complete example for SimSiam here.
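For reference, a minimal SimSiam training loop might look as follows. This is a sketch based on the SimCLR loop above; the SGD settings are illustrative, not tuned values:

    # Illustrative optimizer; hyperparameters are not tuned.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.06)

    for epoch in range(10):
        for (view0, view1), targets, filenames in dataloader:
            z0, p0 = model(view0)
            z1, p1 = model(view1)
            # Apply the loss symmetrically: each prediction is compared with
            # the detached (stop-gradient) projection of the other view.
            loss_value = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
            loss_value.backward()
            optimizer.step()
            optimizer.zero_grad()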
Use PyTorch Lightning to train the model:
from pytorch_lightning import LightningModule, Trainer


class SimCLR(LightningModule):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18()
        resnet.fc = torch.nn.Identity()
        self.backbone = resnet
        self.projection_head = heads.SimCLRProjectionHead(512, 512, 128)
        self.criterion = loss.NTXentLoss()

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        return z

    def training_step(self, batch, batch_index):
        (view0, view1), _, _ = batch
        z0 = self.forward(view0)
        z1 = self.forward(view1)
        loss = self.criterion(z0, z1)
        return loss

    def configure_optimizers(self):
        optim = torch.optim.SGD(self.parameters(), lr=0.06)
        return optim


model = SimCLR()
trainer = Trainer(max_epochs=10, devices=1, accelerator="gpu")
trainer.fit(model, dataloader)
See our docs for a full PyTorch Lightning example.
Or train the model on 4 GPUs:
# Use the distributed version of the loss. With gather_distributed=True the
# loss gathers negatives from all GPUs before computing the similarity matrix.
criterion = loss.NTXentLoss(gather_distributed=True)

trainer = Trainer(
    max_epochs=10,
    devices=4,
    accelerator="gpu",
    strategy="ddp",
    sync_batchnorm=True,
    use_distributed_sampler=True,
)
trainer.fit(model, dataloader)
We provide multi-GPU training examples with distributed gather and synchronized BatchNorm.
Have a look at our docs regarding distributed training.
Benchmarks
Implemented models and their performance on various datasets. Hyperparameters are not
tuned for maximum accuracy. For detailed results and more information about the benchmarks click
here.
ImageNet1k
ImageNet1k benchmarks
Note: Evaluation settings are based on the original papers; see the benchmarking scripts for details.
| Model | Backbone | Batch Size | Epochs | Linear Top1 | Finetune Top1 | kNN Top1 | Tensorboard | Checkpoint |
|---|---|---|---|---|---|---|---|---|
| BarlowTwins | Res50 | 256 | 100 | 62.9 | 72.6 | 45.6 | link | link |
| BYOL | Res50 | 256 | 100 | 62.5 | 74.5 | 46.0 | link | link |
| DINO | Res50 | 128 | 100 | 68.2 | 72.5 | 49.9 | link | link |
| MAE | ViT-B/16 | 256 | 100 | 46.0 | 81.3 | 11.2 | link | link |
| MoCoV2 | Res50 | 256 | 100 | 61.5 | 74.3 | 41.8 | link | link |
| SimCLR* | Res50 | 256 | 100 | 63.2 | 73.9 | 44.8 | link | link |
| SimCLR* + DCL | Res50 | 256 | 100 | 65.1 | 73.5 | 49.6 | link | link |
| SimCLR* + DCLW | Res50 | 256 | 100 | 64.5 | 73.2 | 48.5 | link | link |
| SwAV | Res50 | 256 | 100 | 67.2 | 75.4 | 49.5 | link | link |
| TiCo | Res50 | 256 | 100 | 49.7 | 72.7 | 26.6 | link | link |
| VICReg | Res50 | 256 | 100 | 63.0 | 73.7 | 46.3 | link | link |
*We use square root learning rate scaling instead of linear scaling as it yields
better results for smaller batch sizes. See Appendix B.1 in the SimCLR paper.
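To make the difference concrete, here is a small sketch of the two scaling rules. The base rates (0.3 for linear scaling, 0.075 for square root scaling) follow the SimCLR paper's defaults and are shown for illustration only:

    import math

    def linear_lr(batch_size, base_lr=0.3):
        # Linear scaling: the learning rate grows proportionally with batch size.
        return base_lr * batch_size / 256

    def sqrt_lr(batch_size, base_lr=0.075):
        # Square root scaling: the learning rate grows with the square root of
        # the batch size, which behaves better for smaller batches.
        return base_lr * math.sqrt(batch_size)

    print(linear_lr(256), sqrt_lr(256))  # batch size 256: 0.3 vs. 1.2
    print(linear_lr(64), sqrt_lr(64))    # batch size 64: 0.075 vs. 0.6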
ImageNet100
ImageNet100 benchmarks detailed results
Imagenette
Imagenette benchmarks detailed results
CIFAR-10
CIFAR-10 benchmarks detailed results
Terminology
Below you can see a schematic overview of the different concepts in the package.
The terms in bold are explained in more detail in our documentation.
Next Steps
Head to the documentation and see the things you can achieve with Lightly!
Development
To install dev dependencies (for example, to contribute to the framework), you can use the following command:
pip3 install -e ".[dev]"
For more information about how to contribute have a look here.
Running Tests
Unit tests are within the tests directory and we recommend running them using
pytest. There are two test configurations
available. By default, only a subset will be run:
make test-fast
To run all tests (including the slow ones) you can use the following command:
make test
To test a specific file or directory use:
pytest <path to file or directory>
Code Formatting
To format code with black and isort run:
make format
Further Reading
Self-Supervised Learning:
FAQ
- Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?
  - Self-supervised learning has become increasingly popular among researchers in recent years because the learned representations perform extraordinarily well on downstream tasks. This means they capture the important information in an image better than other types of pre-trained models do. By training a self-supervised model on your dataset, you can make sure that the representations contain all the necessary information about your images.
- How can I contribute?
  - Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute with code are in our contribution guide.
- Is this framework free?
  - Yes, this framework is completely free to use, and we provide the source code. We believe that we need to make training deep learning models more data efficient to achieve widespread adoption. One step towards this goal is to leverage self-supervised learning. The company behind Lightly is committed to keeping this framework open source.
- If this framework is free, how is the company behind Lightly making money?
  - Training self-supervised models is only one part of our solution. The company behind Lightly focuses on processing and analyzing the embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. The Lightly Solution is a freemium product: you can try it out for free, but we charge for some features. In any case, this framework will always be free to use, even for commercial purposes.
Lightly in Research
Company behind this Open Source Framework
Lightly is a spin-off from ETH Zurich that helps companies
build efficient active learning pipelines to select the most relevant data for their models.
You can find out more about the company and its services by following the links below: