CTranslate2

CTranslate2 is a C++ and Python library for efficient inference with Transformer models.

The project implements a custom runtime that applies many performance optimization techniques, such as weight quantization, layer fusion, and batch reordering, to accelerate Transformer models and reduce their memory usage on CPU and GPU.

The following model types are currently supported:

  • Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
  • Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
  • Encoder-only models: BERT, DistilBERT, XLM-RoBERTa

Compatible models must first be converted into an optimized model format; the library includes converters for multiple frameworks.
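
For example, a model hosted on the Hugging Face Hub can be converted from Python with the Transformers converter. This is a minimal sketch assuming the transformers package is installed; the model name and output directory are placeholders chosen for illustration:

from ctranslate2.converters import TransformersConverter

# Convert a Transformers checkpoint into the CTranslate2 model format.
# "facebook/m2m100_418M" and the output directory are illustrative placeholders.
converter = TransformersConverter("facebook/m2m100_418M")
converter.convert("m2m100_418M_ct2", quantization="int8")  # optionally store 8-bit weights

The optional quantization argument applies one of the reduced-precision formats described under "Key features" below; equivalent command-line converter scripts are also installed with the package.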

The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.

Key features

  • Fast and efficient execution on CPU and GPU
    The execution is significantly faster and requires fewer resources than general-purpose deep learning frameworks on supported models and tasks, thanks to many advanced optimizations: layer fusion, padding removal, batch reordering, in-place operations, caching mechanisms, etc.
  • Quantization and reduced precision
    Model serialization and computation support weights with reduced precision: 16-bit floating point (FP16), 16-bit brain floating point (BF16), 16-bit integers (INT16), 8-bit integers (INT8), and 4-bit AWQ quantization (INT4).
  • Support for multiple CPU architectures
    The project supports x86-64 and AArch64/ARM64 processors and integrates multiple backends that are optimized for these platforms: Intel MKL, oneDNN, OpenBLAS, Ruy, and Apple Accelerate.
  • Automatic CPU detection and code dispatch
    One binary can include multiple backends (e.g. Intel MKL and oneDNN) and instruction set architectures (e.g. AVX, AVX2) that are automatically selected at runtime based on the CPU information.
  • Parallel and asynchronous execution
    Multiple batches can be processed in parallel and asynchronously using multiple GPUs or CPU cores (see the sketch after this list).
  • Dynamic memory usage
    The memory usage changes dynamically depending on the request size while still meeting performance requirements thanks to caching allocators on both CPU and GPU.
  • Lightweight on disk
    Quantization can make the models 4 times smaller on disk with minimal accuracy loss.
  • Simple integration
    The project has few dependencies and exposes simple APIs in Python and C++ to cover most integration needs.
  • Configurable and interactive decoding
    Advanced decoding features allow autocompleting a partial sequence and returning alternatives at a specific location in the sequence.
  • Tensor parallelism for distributed inference
    Very large models can be split across multiple GPUs. Follow the documentation to set up the required environment.

Some of these features are difficult to achieve with standard deep learning frameworks and are the motivation for this project.
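
As a rough illustration of the parallel and asynchronous execution feature above, the sketch below queues several batches without blocking and collects the results later. The model path and tokens are placeholders, and the exact option names should be checked against the documentation:

import ctranslate2

# Placeholder path to a converted model directory; inter_threads sets how many
# batches can be processed in parallel on CPU.
translator = ctranslate2.Translator(
    "ende_ct2_model", device="cpu", compute_type="int8", inter_threads=4
)

batches = [
    [["▁Hello", "▁world", "!"]],
    [["▁How", "▁are", "▁you", "?"]],
]

# Asynchronous calls return immediately; result() waits for completion.
async_results = [translator.translate_batch(b, asynchronous=True) for b in batches]
outputs = [r[0].result().hypotheses[0] for r in async_results]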

Installation and usage

CTranslate2 can be installed with pip:

pip install ctranslate2

The Python module is used to convert models and can translate or generate text in a few lines of code:

import ctranslate2

# Translate a batch of tokenized source sequences with an encoder-decoder model.
translator = ctranslate2.Translator(translation_model_path)
translator.translate_batch(tokens)

# Generate text from start tokens with a decoder-only model.
generator = ctranslate2.Generator(generation_model_path)
generator.generate_batch(start_tokens)

See the documentation for more information and examples.
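
The configurable and interactive decoding features listed above are exposed as options of these same methods. The sketch below completes a partial translation and requests alternatives at the first position after the prefix; it assumes source_tokens and prefix_tokens hold tokenizer output for an already converted model:

import ctranslate2

translator = ctranslate2.Translator(translation_model_path)

# Autocomplete a partial target sequence and return several alternatives
# at the first position following the prefix.
results = translator.translate_batch(
    [source_tokens],
    target_prefix=[prefix_tokens],
    beam_size=5,
    num_hypotheses=5,
    return_alternatives=True,
)

for hypothesis in results[0].hypotheses:
    print(hypothesis)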

Benchmarks

We translate the En->De test set newstest2014 with multiple models:

  • OpenNMT-tf WMT14: a base Transformer trained with OpenNMT-tf on the WMT14 dataset (4.5M lines)
  • OpenNMT-py WMT14: a base Transformer trained with OpenNMT-py on the WMT14 dataset (4.5M lines)
  • OPUS-MT: a base Transformer trained with Marian on all OPUS data available on 2020-02-26 (81.9M lines)

The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.

Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.

CPU
| | Tokens per second | Max. memory | BLEU |
| --- | --- | --- | --- |
| OpenNMT-tf WMT14 model | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 209.2 | 2653MB | 26.93 |
| OpenNMT-py WMT14 model | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 275.8 | 2012MB | 26.77 |
| - int8 | 323.3 | 1359MB | 26.72 |
| CTranslate2 3.6.0 | 658.8 | 849MB | 26.77 |
| - int16 | 733.0 | 672MB | 26.82 |
| - int8 | 860.2 | 529MB | 26.78 |
| - int8 + vmap | 1126.2 | 598MB | 26.64 |
| OPUS-MT model | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 | 344.5 | 7605MB | 27.93 |
| - int16 | 330.2 | 5901MB | 27.65 |
| - int8 | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 | 525.0 | 721MB | 27.92 |
| - int16 | 596.1 | 660MB | 27.53 |
| - int8 | 696.1 | 516MB | 27.65 |

Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.

GPU
| | Tokens per second | Max. GPU memory | Max. CPU memory | BLEU |
| --- | --- | --- | --- | --- |
| OpenNMT-tf WMT14 model | | | | |
| OpenNMT-tf 2.31.0 (with TensorFlow 2.11.0) | 1483.5 | 3031MB | 3122MB | 26.94 |
| OpenNMT-py WMT14 model | | | | |
| OpenNMT-py 3.0.4 (with PyTorch 1.13.1) | 1795.2 | 2973MB | 3099MB | 26.77 |
| FasterTransformer 5.3 | 6979.0 | 2402MB | 1131MB | 26.77 |
| - float16 | 8592.5 | 1360MB | 1135MB | 26.80 |
| CTranslate2 3.6.0 | 6634.7 | 1261MB | 953MB | 26.77 |
| - int8 | 8567.2 | 1005MB | 807MB | 26.85 |
| - float16 | 10990.7 | 941MB | 807MB | 26.77 |
| - int8 + float16 | 8725.4 | 813MB | 800MB | 26.83 |
| OPUS-MT model | | | | |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 | 3241.0 | 3381MB | 2156MB | 27.92 |
| - float16 | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 | 5876.4 | 1197MB | 754MB | 27.92 |
| - int8 | 7521.9 | 1005MB | 792MB | 27.79 |
| - float16 | 9296.7 | 909MB | 814MB | 27.90 |
| - int8 + float16 | 8362.7 | 813MB | 766MB | 27.90 |

Executed with CUDA 11 on a g5.xlarge Amazon EC2 instance equipped with an NVIDIA A10G GPU (driver version: 510.47.03).
