super-image 0.1.7 · PyPI
super-image


[Image: the super-image library's MSRN x4 model]

State-of-the-art image super resolution models for PyTorch.

Installation

With pip:

pip install super-image

Demo

Try the various models on your own images instantly with the demo app on Hugging Face Spaces.

Quick Start

Quickly utilise pre-trained models for upscaling your images 2x, 3x and 4x. See the full list of models below.

Open In Colab

from super_image import EdsrModel, ImageLoader
from PIL import Image
import requests

# Fetch a sample image from the Set5 benchmark dataset
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load a pre-trained 2x EDSR model from the Hugging Face Hub
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
inputs = ImageLoader.load_image(image)    # PIL image -> model input tensor
preds = model(inputs)                     # run super resolution

ImageLoader.save_image(preds, './scaled_2x.png')                      # save the upscaled output
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png')    # save a side-by-side comparison

Pre-trained Models

Pre-trained models are available at various scales and hosted on the awesome huggingface_hub. By default, the models were pre-trained on DIV2K, a dataset of 800 high-quality (2K resolution) training images augmented to 4,000 images, with a dev set of 100 validation images (those numbered 801 to 900).

The leaderboard below shows the PSNR / SSIM metrics for each model at various scales on various test sets (Set5, Set14, BSD100, Urban100). The higher the better. All training was to 1000 epochs (some publications, like a2n, train to >1000 epochs in their experiments).
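PSNR is the figure reported before the slash in each leaderboard cell; it is derived from the mean squared error between the ground-truth and upscaled images. A minimal stdlib sketch for reference (the flat-pixel-list representation is purely illustrative; the library's own evaluation code computes this on tensors):

```python
import math

def psnr(original, upscaled, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized images,
    given as flat lists of pixel values. Higher is better."""
    mse = sum((o - u) ** 2 for o, u in zip(original, upscaled)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([50, 100, 150, 200], [52, 98, 151, 199]), 2))  # 44.15
```

A PSNR in the 30-38 dB range, as in the tables below, indicates a close reconstruction; identical images give infinite PSNR.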

Scale x2

| Rank | Model | Params | Set5 | Set14 | BSD100 | Urban100 |
| ---- | --------- | ------ | ------------ | ------------ | ------------ | ------------ |
| 1 | drln-bam | 34m | 38.23/0.9614 | 33.95/0.9206 | 33.95/0.9269 | 32.81/0.9339 |
| 2 | edsr | 41m | 38.19/0.9612 | 33.99/0.9215 | 33.89/0.9266 | 32.68/0.9331 |
| 3 | msrn | 5.9m | 38.08/0.9609 | 33.75/0.9183 | 33.82/0.9258 | 32.14/0.9287 |
| 4 | mdsr | 2.7m | 38.04/0.9608 | 33.71/0.9184 | 33.79/0.9256 | 32.14/0.9283 |
| 5 | msrn-bam | 5.9m | 38.02/0.9608 | 33.73/0.9186 | 33.78/0.9253 | 32.08/0.9276 |
| 6 | edsr-base | 1.5m | 38.02/0.9607 | 33.66/0.9180 | 33.77/0.9254 | 32.04/0.9276 |
| 7 | mdsr-bam | 2.7m | 38.00/0.9607 | 33.68/0.9182 | 33.77/0.9253 | 32.04/0.9272 |
| 8 | awsrn-bam | 1.4m | 37.99/0.9606 | 33.66/0.9180 | 33.76/0.9253 | 31.95/0.9265 |
| 9 | a2n | 1.0m | 37.87/0.9602 | 33.54/0.9171 | 33.67/0.9244 | 31.71/0.9240 |
| 10 | carn | 1.6m | 37.89/0.9602 | 33.53/0.9173 | 33.66/0.9242 | 31.62/0.9229 |
| 11 | carn-bam | 1.6m | 37.83/0.9600 | 33.51/0.9166 | 33.64/0.9240 | 31.53/0.9220 |
| 12 | pan | 260k | 37.77/0.9599 | 33.42/0.9162 | 33.60/0.9235 | 31.31/0.9197 |
| 13 | pan-bam | 260k | 37.70/0.9596 | 33.40/0.9161 | 33.60/0.9234 | 31.35/0.9200 |

Scale x3

| Rank | Model | Params | Set5 | Set14 | BSD100 | Urban100 |
| ---- | --------- | ------ | ------------ | ------------ | ------------ | ------------ |
| 1 | drln-bam | 34m | 35.30/0.9422 | 31.27/0.8624 | 29.78/0.8224 | 29.82/0.8828 |
| 1 | edsr | 44m | 35.31/0.9421 | 31.18/0.8620 | 29.77/0.8224 | 29.75/0.8825 |
| 1 | msrn | 6.1m | 35.12/0.9409 | 31.08/0.8593 | 29.67/0.8198 | 29.31/0.8743 |
| 2 | mdsr | 2.9m | 35.11/0.9406 | 31.06/0.8593 | 29.66/0.8196 | 29.29/0.8738 |
| 3 | msrn-bam | 5.9m | 35.13/0.9408 | 31.06/0.8588 | 29.65/0.8196 | 29.26/0.8736 |
| 4 | mdsr-bam | 2.9m | 35.07/0.9402 | 31.04/0.8582 | 29.62/0.8188 | 29.16/0.8717 |
| 5 | edsr-base | 1.5m | 35.01/0.9402 | 31.01/0.8583 | 29.63/0.8190 | 29.19/0.8722 |
| 6 | awsrn-bam | 1.5m | 35.05/0.9403 | 31.01/0.8581 | 29.63/0.8188 | 29.14/0.8710 |
| 7 | carn | 1.6m | 34.88/0.9391 | 30.93/0.8566 | 29.56/0.8173 | 28.95/0.8670 |
| 8 | a2n | 1.0m | 34.80/0.9387 | 30.94/0.8568 | 29.56/0.8173 | 28.95/0.8671 |
| 9 | carn-bam | 1.6m | 34.82/0.9385 | 30.90/0.8558 | 29.54/0.8166 | 28.84/0.8648 |
| 10 | pan-bam | 260k | 34.62/0.9371 | 30.83/0.8545 | 29.47/0.8153 | 28.64/0.8610 |
| 11 | pan | 260k | 34.64/0.9376 | 30.80/0.8544 | 29.47/0.8150 | 28.61/0.8603 |

Scale x4

| Rank | Model | Params | Set5 | Set14 | BSD100 | Urban100 |
| ---- | --------- | ------ | ------------ | ------------ | ------------ | ------------ |
| 1 | drln | 35m | 32.55/0.8990 | 28.96/0.7901 | 28.65/0.7692 | 26.56/0.7998 |
| 2 | drln-bam | 34m | 32.49/0.8986 | 28.94/0.7899 | 28.63/0.7686 | 26.53/0.7991 |
| 3 | edsr | 43m | 32.50/0.8986 | 28.92/0.7899 | 28.62/0.7689 | 26.53/0.7995 |
| 4 | msrn | 6.1m | 32.19/0.8951 | 28.78/0.7862 | 28.53/0.7657 | 26.12/0.7866 |
| 5 | msrn-bam | 5.9m | 32.26/0.8955 | 28.78/0.7859 | 28.51/0.7651 | 26.10/0.7857 |
| 6 | mdsr | 2.8m | 32.26/0.8953 | 28.77/0.7856 | 28.53/0.7653 | 26.07/0.7851 |
| 7 | mdsr-bam | 2.9m | 32.19/0.8949 | 28.73/0.7847 | 28.50/0.7645 | 26.02/0.7834 |
| 8 | awsrn-bam | 1.6m | 32.13/0.8947 | 28.75/0.7851 | 28.51/0.7647 | 26.03/0.7838 |
| 9 | edsr-base | 1.5m | 32.12/0.8947 | 28.72/0.7845 | 28.50/0.7644 | 26.02/0.7832 |
| 10 | a2n | 1.0m | 32.07/0.8933 | 28.68/0.7830 | 28.44/0.7624 | 25.89/0.7787 |
| 11 | carn | 1.6m | 32.05/0.8931 | 28.67/0.7828 | 28.44/0.7625 | 25.85/0.7768 |
| 12 | carn-bam | 1.6m | 32.00/0.8923 | 28.62/0.7822 | 28.41/0.7614 | 25.77/0.7741 |
| 13 | pan | 270k | 31.92/0.8915 | 28.57/0.7802 | 28.35/0.7595 | 25.63/0.7692 |
| 14 | pan-bam | 270k | 31.90/0.8911 | 28.54/0.7795 | 28.32/0.7591 | 25.60/0.7691 |
| 15 | han | 16m | 31.21/0.8778 | 28.18/0.7712 | 28.09/0.7533 | 25.10/0.7497 |
| 16 | rcan-bam | 15m | 30.80/0.8701 | 27.91/0.7648 | 27.91/0.7477 | 24.75/0.7346 |
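The tables show a clear size/quality trade-off: the top models (drln, edsr) are 20-40x larger than the compact ones (pan, edsr-base) for roughly half a dB of PSNR. A small sketch of picking a model under a parameter budget (the helper name is illustrative; the figures are the x4 Set5 PSNR values from the table above, with parameter counts in millions):

```python
# Set5 PSNR vs parameter count (millions) for a few x4 models, from the table above
X4_MODELS = {
    "drln": (35.0, 32.55),
    "edsr": (43.0, 32.50),
    "msrn": (6.1, 32.19),
    "edsr-base": (1.5, 32.12),
    "pan": (0.27, 31.92),
}

def best_under_budget(models, max_params_m):
    """Highest-PSNR model whose parameter count fits within the budget."""
    candidates = [(psnr, name) for name, (params, psnr) in models.items()
                  if params <= max_params_m]
    return max(candidates)[1] if candidates else None

print(best_under_budget(X4_MODELS, 2.0))  # edsr-base
```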

You can find a notebook to easily run evaluation on pretrained models below:

Open In Colab

Train Models

We need the huggingface datasets library to download the data:

pip install datasets

The following code downloads the data and preprocesses/augments it for training.

from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop

augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
    .map(augment_five_crop, batched=True, desc="Augmenting Dataset")                                # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset)                                                     # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation'))      # prepare the eval dataset for the PyTorch DataLoader
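As its name suggests, augment_five_crop presumably multiplies the dataset by taking five crops per image, which matches the 800-to-4,000 image expansion mentioned above. Assuming it follows the usual five-crop convention (four corners plus the centre, as in torchvision's FiveCrop), the crop geometry can be sketched with a hypothetical helper:

```python
def five_crop_boxes(width, height, crop):
    """Return (left, top, right, bottom) boxes for the four corner crops
    plus the centre crop of a width x height image."""
    cl, ct = (width - crop) // 2, (height - crop) // 2  # centre crop origin
    return [
        (0, 0, crop, crop),                             # top-left
        (width - crop, 0, width, crop),                 # top-right
        (0, height - crop, crop, height),               # bottom-left
        (width - crop, height - crop, width, height),   # bottom-right
        (cl, ct, cl + crop, ct + crop),                 # centre
    ]

print(five_crop_boxes(100, 80, 40)[4])  # (30, 20, 70, 60)
```

The crop size above is illustrative; the library's actual augmentation parameters may differ.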

The training code is provided below:

from super_image import Trainer, TrainingArguments, EdsrModel, EdsrConfig

training_args = TrainingArguments(
    output_dir='./results',                 # output directory
    num_train_epochs=1000,                  # total number of training epochs
)

config = EdsrConfig(
    scale=4,                                # train a model to upscale 4x
)
model = EdsrModel(config)

trainer = Trainer(
    model=model,                         # the instantiated model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=eval_dataset            # evaluation dataset
)

trainer.train()
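At 1,000 epochs over the 4,000 augmented DIV2K images, the total number of optimizer steps depends on the batch size. A quick back-of-the-envelope sketch (the batch size of 16 is an illustrative assumption, not a documented library default):

```python
import math

def total_steps(num_examples, epochs, batch_size):
    """Optimizer steps for a full run; the last partial batch rounds up."""
    return epochs * math.ceil(num_examples / batch_size)

print(total_steps(4000, 1000, 16))  # 250000
```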

Open In Colab
