x2vec, Towhee is all you need!
ENGLISH | 中文文档
Towhee makes it easy to build neural data processing pipelines for AI applications.
We provide hundreds of models, algorithms, and transformations that can be used as standard pipeline building blocks.
You can use Towhee's Pythonic API to build a prototype of your pipeline and
automatically optimize it for production-ready environments.
:art: Various Modalities: Towhee supports data processing on a variety of modalities, including images, videos, text, audio, molecular structures, etc.
:mortar_board: SOTA Models: Towhee provides SOTA models across 5 fields (CV, NLP, Multimodal, Audio, Medical), 15 tasks, and 140+ model architectures. These include BERT, CLIP, ViT, SwinTransformer, MAE, and data2vec, all pretrained and ready to use.
:package: Data Processing: Towhee also provides traditional methods alongside neural network models to help you build practical data processing pipelines. We have a rich pool of operators available, such as video decoding, audio slicing, frame sampling, feature vector dimension reduction, ensembling, and database operations.
:snake: Pythonic API: Towhee includes a Pythonic method-chaining API for describing custom data processing pipelines. We also support schemas, which makes processing unstructured data as easy as handling tabular data.
What's New
v1.0.0rc1 May 4, 2023
- Added trainers to operators:
timm, isc, transformers, clip
- Added a GPU video decoder:
VPF
- All Towhee pipelines can now be converted into Nvidia Triton services.
v0.9.0 Dec. 2, 2022
- Added one video classification model:
Vis4mer
- Added three visual backbones:
MCProp,
RepLKNet,
Shunted Transformer
- Added two code search operators:
code_search.codebert,
code_search.unixcoder
- Added five image captioning operators:
image_captioning.expansionnet-v2,
image_captioning.magic,
image_captioning.clip_caption_reward,
image_captioning.blip,
image_captioning.clipcap
- Added five image-text embedding operators:
image_text_embedding.albef,
image_text_embedding.ru_clip,
image_text_embedding.japanese_clip,
image_text_embedding.taiyi,
image_text_embedding.slip
- Added one machine translation operator:
machine_translation.opus_mt
- Added one filter-tiny-segments operator:
video-copy-detection.filter-tiny-segments
- Added an advanced tutorial for audio fingerprinting:
Audio Fingerprint II: Music Detection with Temporal Localization (increased accuracy from 84% to 90%)
v0.8.1 Sep. 30, 2022
v0.8.0 Aug. 16, 2022
- Towhee now supports generating an Nvidia Triton Server from a Towhee pipeline, with additional support for GPU image decoding.
- Added one audio fingerprinting model:
nnfp
- Added two image embedding models:
RepMLP, WaveViT
v0.7.3 Jul. 27, 2022
- Added one multimodal (text/image) model:
CoCa
- Added two video models for grounded situation recognition & repetitive action counting:
CoFormer,
TransRAC
- Added two SOTA models for image tasks (image retrieval, image classification, etc.):
CVNet,
MaxViT
v0.7.1 Jul. 1, 2022
v0.7.0 Jun. 24, 2022
v0.6.1 May 13, 2022
Getting started
Towhee requires Python 3.6+. You can install Towhee via pip:
pip install towhee towhee.models
If you run into any pip-related install problems, please try upgrading pip with pip install -U pip.
Let's try your first Towhee pipeline. Below is an example of how to create a CLIP-based cross-modal retrieval pipeline.
The example requires towhee 1.0.0, which can be installed with pip install towhee==1.0.0. See the latest usage documentation for more details.
from towhee import ops, pipe, DataCollection

# Insert pipeline: decode each image, embed it with CLIP,
# normalize the embedding, and store it in a local Faiss index.
p = (
    pipe.input('file_name')
        .map('file_name', 'img', ops.image_decode.cv2())
        .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='image'))
        .map('vec', 'vec', ops.towhee.np_normalize())
        .map(('vec', 'file_name'), (), ops.ann_insert.faiss_index('./faiss', 512))
        .output()
)

for f_name in ['https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog1.png',
               'https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog2.png',
               'https://raw.githubusercontent.com/towhee-io/towhee/main/assets/dog3.png']:
    p(f_name)
del p

# Search pipeline: embed the query text with CLIP, normalize it,
# retrieve the 3 nearest neighbors from the Faiss index, and decode
# the matched image files for display.
decode = ops.image_decode.cv2('rgb')
p = (
    pipe.input('text')
        .map('text', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch32', modality='text'))
        .map('vec', 'vec', ops.towhee.np_normalize())
        .map('vec', 'row', ops.ann_search.faiss_index('./faiss', 3))
        .map('row', 'images', lambda x: [decode(item[2][0]) for item in x])
        .output('text', 'images')
)
DataCollection(p('a cat')).show()
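DataCollection wraps the pipeline output so that show() renders the query text alongside its matched images, which is handy when prototyping in a notebook.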
For more examples, check out Towhee Examples.
Core Concepts
Towhee is composed of four main building blocks - Operators, Pipelines, the DataCollection API, and the Engine.
- Operators: An operator is a single building block of a neural data processing pipeline. Different implementations of operators are categorized by tasks, with each task having a standard interface. An operator can be a deep learning model, a data processing method, or a plain Python function, as the sketch after this list shows.
- Pipelines: A pipeline is composed of several operators interconnected in the form of a DAG (directed acyclic graph). This DAG can implement complex functionalities, such as embedding feature extraction, data tagging, and cross-modal data analysis.
- DataCollection API: A Pythonic, method-chaining API for building custom pipelines. A pipeline defined with the DataCollection API can run locally on a laptop for fast prototyping and then be converted into a docker image, with end-to-end optimizations, for production-ready environments.
- Engine: The engine sits at Towhee's core. Given a pipeline, the engine drives dataflow among individual operators, schedules tasks, and monitors compute resource usage (CPU/GPU/etc.). We provide a basic engine within Towhee to run pipelines on a single-instance machine and a Triton-based engine for docker containers.
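To make the operator and pipeline concepts concrete, here is a minimal sketch that wires a plain Python function into a pipeline as an operator, using the same pipe API as the Getting started example above. The word_count function and the column names are illustrative, not part of Towhee:

from towhee import pipe, DataCollection

# Any plain Python callable can serve as an operator.
def word_count(text):
    return len(text.split())

p = (
    pipe.input('text')
        .map('text', 'n_words', word_count)  # custom Python-function operator
        .output('text', 'n_words')
)

DataCollection(p('Towhee makes neural data processing easy')).show()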
Contributing
Writing code is not the only way to contribute! Submitting issues, answering questions, and improving documentation are just some of the many ways you can help our growing community. Check out our contributing page for more information.
Special thanks go to these folks for contributing to Towhee, whether on GitHub, our Towhee Hub, or elsewhere:
Looking for a database to store and index your embedding vectors? Check out Milvus.