Multilingual-CLIP
OpenAI CLIP text encoders for any language
Colab Notebook
·
Pre-trained Models
·
Report Bug
Overview
OpenAI recently released the paper Learning Transferable Visual Models From Natural Language Supervision, in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images by matching their corresponding vector representations using a contrastive learning objective.
CLIP consists of two separate models, a visual encoder and a text encoder. These were trained on a whopping 400 million images and corresponding captions.
OpenAI has since released a set of their smaller CLIP models, which can be found on the official CLIP Github.
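To make the objective concrete, here is a minimal sketch of the symmetric contrastive loss CLIP is trained with, written in PyTorch. The function name and the temperature value are illustrative, not taken from OpenAI's code.

import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so that dot products equal cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: entry (i, j) compares image i with caption j.
    logits = image_emb @ text_emb.T / temperature
    # Matching image-caption pairs lie on the diagonal.
    targets = torch.arange(len(logits), device=logits.device)
    # Symmetric cross-entropy: each image must find its caption, and vice versa.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2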
This repository contains
- Pre-trained CLIP-Text encoders for multiple languages
- Pytorch & Tensorflow inference code
- Tensorflow training code
Requirements
While other versions may work equally well, we have worked with the following:
- Python = 3.6.9
- Transformers = 4.8.1
Install
pip install multilingual-clip torch
You can also choose to pip install tensorflow instead of torch.
Inference Usage
Inference code for Tensorflow is also available in inference_example.py
from multilingual_clip import pt_multilingual_clip
import transformers

texts = [
    'Three blind horses listening to Mozart.',
    'Älgen är skogens konung!',  # Swedish: 'The moose is the king of the forest!'
    'Wie leben Eisbären in der Antarktis?',  # German: 'How do polar bears live in the Antarctic?'
    'Вы знали, что все белые медведи левши?'  # Russian: 'Did you know that all polar bears are left-handed?'
]
model_name = 'M-CLIP/XLM-Roberta-Large-Vit-L-14'

# Load the multilingual text encoder and its tokenizer.
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

embeddings = model.forward(texts, tokenizer)
print(embeddings.shape)
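To compare these text embeddings against images, pair them with image embeddings from the corresponding OpenAI CLIP visual encoder. The following is a rough sketch using the official clip package; it assumes ViT-L/14 is the visual counterpart of the XLM-Roberta-Large-Vit-L-14 text model (as the name suggests), and example.jpg is a placeholder path.

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-L/14", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    image_embedding = clip_model.encode_image(image).float().cpu()

# Cosine similarity between each text embedding (from the example above) and the image.
image_embedding = image_embedding / image_embedding.norm(dim=-1, keepdim=True)
text_embeddings = embeddings.detach()
text_embeddings = text_embeddings / text_embeddings.norm(dim=-1, keepdim=True)
print(text_embeddings @ image_embedding.T)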
Install for development
Setup a virtualenv:
python3 -m venv .env
source .env/bin/activate
pip install -e .
Pre-trained Models
Every text encoder is a Huggingface available transformer, with an additional linear layer on top. For more information about a specific model, click the Model Name to see its model card.
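Conceptually, each model does something along these lines. This is a simplified sketch, not the exact M-CLIP implementation; the class name and pooling details are illustrative.

import torch
import transformers

class TextEncoderWithProjection(torch.nn.Module):
    # A multilingual transformer with a linear layer projecting into CLIP space.
    def __init__(self, base_model_name, clip_dim):
        super().__init__()
        self.transformer = transformers.AutoModel.from_pretrained(base_model_name)
        self.projection = torch.nn.Linear(self.transformer.config.hidden_size, clip_dim)

    def forward(self, texts, tokenizer):
        batch = tokenizer(texts, padding=True, return_tensors='pt')
        hidden = self.transformer(**batch).last_hidden_state
        # Mean-pool over non-padding tokens, then project into the CLIP embedding space.
        mask = batch['attention_mask'].unsqueeze(-1)
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
        return self.projection(pooled)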
Validation & Training Curves
The following table shows the Txt2Img Recall@10 scores on the human-translated MS-COCO test set.
The training curves for these models are available at this Weights & Biases page.
Legacy Usage and Models
Older versions of M-CLIP stored the linear weights separately from Huggingface, whereas the new models have them incorporated directly into the Huggingface repository. More information about these older models can be found in this section.
Click for more information
Download CLIP Model
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git
Replace cudatoolkit=11.0 above with the appropriate CUDA version on your machine, or with cpuonly when installing on a machine without a GPU.
For more information please see the official CLIP repository.
Download Linear Weights
$ bash legacy_get-weights.sh
Inference
from multilingual_clip import multilingual_clip

# List the names of all available legacy models.
print(multilingual_clip.AVAILABLE_MODELS.keys())

model = multilingual_clip.load_model('M-BERT-Distil-40')

embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
For a more elaborate example comparing the textual embeddings to the CLIP image embeddings, see this Colab notebook.
Legacy Pre-trained Models
Every text encoder is a Huggingface available transformer, with an additional linear layer on top. None of these models has been extensively tested, but for more information and qualitative test results for a specific model, click the Model Name to see its model card.
*** Make sure to update to the most recent version of the repository when downloading a new model, and re-run the shell script to download the Linear Weights. ***
Training a new model
This folder contains the code used for training the above models. If you wish to train your own model, you need to do the following:
- Prepare a set of translated sentence pairs from English -> Your Language(s)
- Compute regular CLIP-Text embeddings for the English sentences.
- Edit Training.py to load your data.
- Train a new CLIP-Text encoder via Teacher Learning, as sketched below.
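The idea behind Teacher Learning is that the original CLIP text encoder acts as a frozen teacher: the multilingual student is trained so that its embedding of a translated sentence matches the pre-computed CLIP embedding of the English original. Below is a minimal sketch of one training step in PyTorch, assuming an MSE objective; the function and argument names are illustrative, and the repository's actual training code is in Tensorflow.

import torch

def teacher_learning_step(student, tokenizer, optimizer, translated_texts, teacher_embeddings):
    # One step: push the student's embeddings towards the frozen CLIP teacher targets.
    optimizer.zero_grad()
    student_embeddings = student(translated_texts, tokenizer)
    loss = torch.nn.functional.mse_loss(student_embeddings, teacher_embeddings)
    loss.backward()
    optimizer.step()
    return loss.item()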
Pre-computed CLIP Embeddings & Translation Data
This Google Drive folder contains pre-computed CLIP-Text embeddings for a large portion of the image captions of GCC + MSCOCO + VizWiz.
The Google Drive folder also contains the translation data used to train the currently available models.
Good Luck
Contribution
If you have trained a CLIP Text encoder specific to your language, or another model covering a language not supported here, please feel free to contact us and we will either upload your model and credit you, or simply link to your already uploaded model.
Contact
If you have questions regarding the code or otherwise related to this GitHub page, please open an issue.
For other purposes, feel free to contact me directly at: Fredrik.Carlsson@ri.se
Acknowledgements
License
Distributed under the MIT License. See LICENSE for more information.