News: please check all available models on the MODELS page.
COMET requires Python 3.8 or above. Simple installation from PyPI:
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
Note: To use some COMET models, such as Unbabel/wmt22-cometkiwi-da, you must acknowledge its license on the Hugging Face Hub and log in to the Hub.
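One way to log in (a minimal sketch, assuming the huggingface_hub package is installed and that you already have an access token) is from Python:

# Sketch: authenticate with the Hugging Face Hub so that gated COMET
# checkpoints such as Unbabel/wmt22-cometkiwi-da can be downloaded.
from huggingface_hub import login

# The token below is a placeholder; paste your own token, or call login()
# with no arguments for an interactive prompt.
# Running `huggingface-cli login` in a terminal achieves the same thing.
login(token="hf_xxx")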
To develop locally, run the following commands:
git clone https://github.com/Unbabel/COMET
cd COMET
pip install poetry
poetry install
For development, you can run the CLI tools directly, e.g.,
PYTHONPATH=. ./comet/cli/score.py
Test examples:
echo -e "10 到 15 分钟可以送到吗\nPode ser entregue dentro de 10 a 15 minutos?" >> src.txt
echo -e "Can I receive my food in 10 to 15 minutes?\nCan it be delivered in 10 to 15 minutes?" >> hyp1.txt
echo -e "Can it be delivered within 10 to 15 minutes?\nCan you send it for 10 to 15 minutes?" >> hyp2.txt
echo -e "Can it be delivered between 10 to 15 minutes?\nCan it be delivered between 10 to 15 minutes?" >> ref.txt
comet-score -s src.txt -t hyp1.txt -r ref.txt
You can set the number of GPUs using the --gpus flag (0 to test on CPU).
For better error analysis, you can use XCOMET models such as Unbabel/XCOMET-XL. You can export the identified errors using the --to_json flag:
comet-score -s src.txt -t hyp1.txt -r ref.txt --model Unbabel/XCOMET-XL --to_json output.json
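The exact structure of the exported JSON depends on the model, so the sketch below only assumes that output.json is valid JSON and pretty-prints it for inspection:

# Sketch: inspect the file written by --to_json without assuming its schema.
import json

with open("output.json", encoding="utf-8") as f:
    results = json.load(f)

# Pretty-print the beginning of the structure to see which fields are available.
print(json.dumps(results, indent=2, ensure_ascii=False)[:2000])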
Scoring multiple systems:
comet-score -s src.txt -t hyp1.txt hyp2.txt -r ref.txt
WMT test sets via SacreBLEU:
comet-score -d wmt22:en-de -t PATH/TO/TRANSLATIONS
Scoring with context:
echo -e "Pies made from apples like these. </s> Oh, they do look delicious.\nOh, they do look delicious." >> src.txt
echo -e "Des tartes faites avec des pommes comme celles-ci. </s> Elles ont l’air delicieux.\nElles ont l’air delicieux" >> hyp1.txt
echo -e "Des tartes faites avec des pommes comme celles-ci. </s> Ils ont l’air delicieux.\nIls ont l’air delicieux." >> hyp2.txt
where </s> is the separator token of the specific tokenizer (here: xlm-roberta-large) that the underlying model uses.
comet-score -s src.txt -t hyp1.txt hyp2.txt --model Unbabel/wmt20-comet-qe-da --enable-context
If you are only interested in a system-level score, use the following command:
comet-score -s src.txt -t hyp1.txt -r ref.txt --quiet --only_system
Reference-free (quality estimation) scoring:
comet-score -s src.txt -t hyp1.txt --model Unbabel/wmt22-cometkiwi-da
Note: To use the Unbabel/wmt23-cometkiwi-da-xl model, you first have to acknowledge its license on the Hugging Face Hub.
When comparing multiple MT systems, we encourage you to run the comet-compare command to get statistical significance with a paired t-test and bootstrap resampling (Koehn, 2004).
comet-compare -s src.de -t hyp1.en hyp2.en hyp3.en -r ref.en
The MBR command allows you to rank translations and select the best one according to COMET metrics. For more details you can read our paper on Quality-Aware Decoding for Neural Machine Translation.
comet-mbr -s [SOURCE].txt -t [MT_SAMPLES].txt --num_sample [X] -o [OUTPUT_FILE].txt
If you are working with a very large candidate list, you can use the --rerank_top_k flag to prune to the top-k most promising candidates according to a reference-free metric.
Example for a candidate list of 1000 samples:
comet-mbr -s [SOURCE].txt -t [MT_SAMPLES].txt -o [OUTPUT_FILE].txt --num_sample 1000 --rerank_top_k 100 --gpus 4 --qe_model Unbabel/wmt23-cometkiwi-da-xl
Your source and samples files should be formatted as shown below.
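As an illustration only, assuming the common layout in which the samples file lists the num_sample candidates for each source consecutively (one candidate per line), the two files for two sources with --num_sample 3 could be built like this; the candidate texts are hypothetical:

# Illustrative sketch of an assumed comet-mbr input layout:
# source.txt has one segment per line; samples.txt lists the candidates
# for each source consecutively (num_sample lines per source).
sources = [
    "10 到 15 分钟可以送到吗",
    "Pode ser entregue dentro de 10 a 15 minutos?",
]
samples = [
    # three candidates for the first source
    "Can it be delivered within 10 to 15 minutes?",
    "Can I receive my food in 10 to 15 minutes?",
    "Is delivery possible in 10 to 15 minutes?",
    # three candidates for the second source
    "Can it be delivered within 10 to 15 minutes?",
    "Can you send it for 10 to 15 minutes?",
    "Could it arrive in 10 to 15 minutes?",
]
with open("source.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sources) + "\n")
with open("samples.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(samples) + "\n")
# comet-mbr -s source.txt -t samples.txt --num_sample 3 -o best.txt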
Within COMET, there are several evaluation models available. You can refer to the MODELS page for a comprehensive list of all available models. Here is a concise list of the main reference-based and reference-free models:
Unbabel/wmt22-comet-da - This model employs a reference-based regression approach and is built upon the XLM-R architecture. It has been trained on direct assessments from WMT17 to WMT20 and provides scores ranging from 0 to 1, where 1 signifies a perfect translation.

Unbabel/wmt22-cometkiwi-da - This reference-free model employs a regression approach and is built on top of InfoXLM. It has been trained using direct assessments from WMT17 to WMT20, as well as direct assessments from the MLQE-PE corpus. Similar to other models, it generates scores ranging from 0 to 1. For those interested, we also offer larger versions of this model: Unbabel/wmt23-cometkiwi-da-xl with 3.5 billion parameters and Unbabel/wmt23-cometkiwi-da-xxl with 10.7 billion parameters.

Unbabel/XCOMET-XXL - Our latest model is trained to identify error spans and assign a final quality score, resulting in an explainable neural metric. We offer this version in XXL with 10.7 billion parameters, as well as the XL variant with 3.5 billion parameters (Unbabel/XCOMET-XL). These models have demonstrated the highest correlation with MQM and are our best-performing evaluation models.

Please be aware that different models may be subject to varying licenses. To learn more, please refer to the LICENSES.models and model licenses sections.
If you intend to compare your results with papers published before 2022, it's likely that they used older evaluation models. In such cases, please refer to Unbabel/wmt20-comet-da and Unbabel/wmt20-comet-qe-da, which were the primary checkpoints used in previous versions (<2.0) of COMET.
Also, the UniTE metric, developed by the NLP2CT Lab at the University of Macau and Alibaba Group, can be used directly through COMET; check here for more details.
New: An excellent reference for learning how to interpret machine translation metrics is the analysis paper by Kocmi et al. (2024), available at this link.
When using COMET to evaluate machine translation, it's important to understand how to interpret the scores it produces.
In general, COMET models are trained to predict quality scores for translations. These scores are typically normalized using a z-score transformation to account for individual differences among annotators. While the raw score itself does not have a direct interpretation, it is useful for ranking translations and systems according to their quality.
However, since 2022 we have introduced a new training approach that scales the scores between 0 and 1. This makes it easier to interpret the scores: a score close to 1 indicates a high-quality translation, while a score close to 0 indicates a translation that is no better than random chance. Also, with the introduction of XCOMET models we can now analyse which text spans are part of minor, major or critical errors according to the MQM typology.
It's worth noting that when using COMET to compare the performance of two different translation systems, it's important to run the comet-compare command to obtain statistical significance measures. This command compares the output of two systems using a statistical hypothesis test, providing an estimate of the probability that the observed difference in scores between the systems is due to chance. This is an important step to ensure that any differences in scores between systems are statistically significant.
Overall, the added interpretability of scores in the latest COMET models, combined with the ability to assess statistical significance between systems using comet-compare, makes COMET a valuable tool for evaluating machine translation.
All the above-mentioned models are built on top of XLM-R (or variants thereof), which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
If you are interested in COMET metrics for African languages, please visit afriCOMET.
from comet import download_model, load_from_checkpoint
# Choose your model from Hugging Face Hub
model_path = download_model("Unbabel/XCOMET-XL")
# or for example:
# model_path = download_model("Unbabel/wmt22-comet-da")
# Load the model checkpoint:
model = load_from_checkpoint(model_path)
# Data must be in the following format:
data = [
{
"src": "10 到 15 分钟可以送到吗",
"mt": "Can I receive my food in 10 to 15 minutes?",
"ref": "Can it be delivered between 10 to 15 minutes?"
},
{
"src": "Pode ser entregue dentro de 10 a 15 minutos?",
"mt": "Can you send it for 10 to 15 minutes?",
"ref": "Can it be delivered between 10 to 15 minutes?"
}
]
# Call predict method:
model_output = model.predict(data, batch_size=8, gpus=1)
As output, we get the following information:
# Sentence-level scores (list)
>>> model_output.scores
[0.9822099208831787, 0.9599897861480713]
# System-level score (float)
>>> model_output.system_score
0.971099853515625
# Detected error spans (list of list of dicts)
>>> model_output.metadata.error_spans
[
[{'confidence': 0.4160953164100647,
'end': 21,
'severity': 'minor',
'start': 13,
'text': 'my food'}],
[{'confidence': 0.40004390478134155,
'end': 19,
'severity': 'minor',
'start': 3,
'text': 'you send it for'}]
]
However, note that not all COMET models return metadata with detected error spans.
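A defensive check along these lines (a sketch, not part of the documented API) keeps scoring code working when you switch from an XCOMET model to one that only returns scores:

# Sketch: only iterate over error spans if the loaded model produced them.
try:
    error_spans = model_output.metadata.error_spans
except (AttributeError, KeyError):
    error_spans = None

if error_spans:
    for sentence_spans in error_spans:
        for span in sentence_spans:
            print(span["severity"], span["text"])
else:
    print("No error spans returned; fall back to model_output.scores.")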
Check this notebook for a minimal example of how you can combine xCOMET with xTower to generate a natural language explanation for each error span.
For the Portuguese-English example above, we would call xTower with the following prompt:
You are provided with a Source, Translation, Translation quality analysis, and Translation quality score (weak, moderate, good, excellent, best). The Translation quality analysis contains a translation with marked error spans with different levels of severity (minor or major). Given this information, generate an explanation for each error and a fully correct translation.
Portuguese source: Pode ser entregue dentro de 10 a 15 minutos?
English translation: Can you send it for 10 to 15 minutes?
Translation quality analysis: Can <error1 severity='minor'>you send it for</error1> 10 to 15 minutes?
Translation quality score: excellent
And get this as output:
Explanation for error1: The phrase "Can you send it for 10 to 15 minutes?" is a mistranslation of the original Portuguese sentence. The correct interpretation should focus on the delivery time rather than the duration of sending. The original sentence is asking about the delivery time, not the duration of sending.
Translation correction: Can it be delivered within 10 to 15 minutes?
For more information, check xTower documentation.
Instead of using pretrained models, you can train your own model with the following command:
comet-train --cfg configs/models/{your_model_config}.yaml
You can then use your own metric to score:
comet-score -s src.de -t hyp1.en -r ref.en --model PATH/TO/CHECKPOINT
You can also upload your model to the Hugging Face Hub, using Unbabel/wmt22-comet-da as an example. Then you can use your model directly from the hub.
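For instance, once your checkpoint is on the Hub, loading it should work just like loading the official models; the repository name below is a hypothetical placeholder:

# Sketch: load your own uploaded metric from the Hugging Face Hub.
from comet import download_model, load_from_checkpoint

model_path = download_model("your-username/your-comet-model")  # hypothetical repo id
model = load_from_checkpoint(model_path)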
In order to run the toolkit tests, you must run the following commands:
poetry run coverage run --source=comet -m unittest discover
poetry run coverage report -m # Expected coverage 76%
Note: Testing on CPU takes a long time.
If you use COMET please cite our work and don't forget to say which model you used!
xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection
Scaling up CometKiwi: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task
CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task
COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task
Are References Really Needed? Unbabel-IST 2021 Submission for the Metrics Shared Task
COMET - Deploying a New State-of-the-art MT Evaluation Metric in Production