The data structure for multimodal data
Note: The README you're currently viewing is for DocArray >0.30, which introduces some significant changes from DocArray 0.21. If you wish to continue using the older DocArray <=0.21, install it via
pip install docarray==0.21
and refer to its codebase, documentation, and hot-fixes branch for more information.
DocArray is a Python library for the representation, transmission, storage, and retrieval of multimodal data. Tailored to the development of multimodal AI applications, it is designed to integrate seamlessly with the wider Python and machine learning ecosystems. DocArray is distributed under the Apache License 2.0 and is currently a sandbox project within the LF AI & Data Foundation.
To install DocArray from the CLI, run the following command:
pip install -U docarray
New to DocArray? Depending on your use case and background, there are multiple ways to learn about DocArray.
DocArray empowers you to represent your data in a manner that is inherently attuned to machine learning.
This is particularly beneficial in a range of scenarios.
:bulb: Familiar with Pydantic? You'll be pleased to learn that DocArray is not only constructed atop Pydantic but also maintains complete compatibility with it! Furthermore, we have a specific section dedicated to your needs!
In essence, DocArray facilitates data representation in a way that mirrors Python dataclasses, with machine learning being an integral component:
from docarray import BaseDoc
from docarray.typing import TorchTensor, ImageUrl
import torch
# Define your data model
class MyDocument(BaseDoc):
description: str
image_url: ImageUrl # could also be VideoUrl, AudioUrl, etc.
image_tensor: TorchTensor[1704, 2272, 3] # you can express tensor shapes!
# Stack multiple documents in a Document Vector
from docarray import DocVec
vec = DocVec[MyDocument](
[
MyDocument(
description="A cat",
image_url="https://example.com/cat.jpg",
image_tensor=torch.rand(1704, 2272, 3),
),
]
* 10
)
print(vec.image_tensor.shape) # (10, 1704, 2272, 3)
Let's take a closer look at how you can represent your data with DocArray:
from docarray import BaseDoc
from docarray.typing import TorchTensor, ImageUrl
from typing import Optional
import torch
# Define your data model
class MyDocument(BaseDoc):
description: str
image_url: ImageUrl # could also be VideoUrl, AudioUrl, etc.
image_tensor: Optional[
TorchTensor[1704, 2272, 3]
] = None # could also be NdArray or TensorflowTensor
embedding: Optional[TorchTensor] = None
Not only can you define the types of your data, you can also specify the shape of your tensors!
# Create a document
doc = MyDocument(
description="This is a photo of a mountain",
image_url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
)
# Load image tensor from URL
doc.image_tensor = doc.image_url.load()
# Compute embedding with any model of your choice
def clip_image_encoder(image_tensor: TorchTensor) -> TorchTensor: # dummy function
return torch.rand(512)
doc.embedding = clip_image_encoder(doc.image_tensor)
print(doc.embedding.shape) # torch.Size([512])
Of course, you can compose Documents into a nested structure:
from docarray import BaseDoc
from docarray.documents import ImageDoc, TextDoc
import numpy as np
class MultiModalDocument(BaseDoc):
image_doc: ImageDoc
text_doc: TextDoc
doc = MultiModalDocument(
image_doc=ImageDoc(tensor=np.zeros((3, 224, 224))), text_doc=TextDoc(text='hi!')
)
You rarely work with a single data point at a time, especially in machine learning applications. That's why you can easily collect multiple Documents:

When building or interacting with an ML system, you usually want to process multiple Documents (data points) at once. DocArray offers two data structures for this:

DocVec: A vector of Documents. All tensors in the documents are stacked into a single tensor. Perfect for batch processing and use inside of ML models.
DocList: A list of Documents. All tensors in the documents are kept as-is. Perfect for streaming, re-ranking, and shuffling of data.

Let's take a look at them, starting with DocVec:
from docarray import DocVec, BaseDoc
from docarray.typing import AnyTensor, ImageUrl
import numpy as np
class Image(BaseDoc):
url: ImageUrl
tensor: AnyTensor # this allows torch, numpy, and TensorFlow tensors
vec = DocVec[Image]( # the DocVec is parametrized by your personal schema!
[
Image(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
)
for _ in range(100)
]
)
In the code snippet above, DocVec is parametrized by the type of document you want to use with it: DocVec[Image].
This may look weird at first, but we're confident that you'll get used to it quickly! Besides, it lets us do some cool things, like having bulk access to the fields that you defined in your document:
tensor = vec.tensor # gets all the tensors in the DocVec
print(tensor.shape) # which are stacked up into a single tensor!
print(vec.url) # you can bulk access any other field, too
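You can write to these fields in bulk, too. Here is a minimal sketch, under the assumption that column assignment accepts an array whose shape matches the stacked tensor:
# overwrite the whole stacked tensor column in one go (shape: n_docs x tensor shape)
vec.tensor = np.zeros((100, 3, 224, 224))
print(vec[0].tensor.shape)  # (3, 224, 224)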
The second data structure, DocList, works in a similar way:
from docarray import DocList
dl = DocList[Image]( # the DocList is parametrized by your personal schema!
[
Image(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
)
for _ in range(100)
]
)
You can still bulk access the fields of your document:
tensors = dl.tensor # gets all the tensors in the DocList
print(type(tensors)) # as a list of tensors
print(dl.url) # you can bulk access any other field, too
And you can insert, remove, and append documents to your DocList:
# append
dl.append(
Image(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
)
)
# delete
del dl[0]
# insert
dl.insert(
0,
Image(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
),
)
And you can seamlessly switch between DocVec and DocList:
vec_2 = dl.to_doc_vec()
assert isinstance(vec_2, DocVec)
dl_2 = vec_2.to_doc_list()
assert isinstance(dl_2, DocList)
DocArray facilitates the transmission of your data in a manner inherently compatible with machine learning.
This includes native support for Protobuf and gRPC, along with HTTP and serialization to JSON, JSONSchema, Base64, and Bytes.
This feature proves beneficial in several scenarios.
:bulb: Are you familiar with FastAPI? You'll be delighted to learn that DocArray maintains full compatibility with FastAPI! Plus, we have a dedicated section specifically for you!
When it comes to data transmission, serialization is a crucial step. Let's delve into how DocArray streamlines this process:
from docarray import BaseDoc
from docarray.typing import ImageTorchTensor
import torch
# model your data
class MyDocument(BaseDoc):
description: str
image: ImageTorchTensor[3, 224, 224]
# create a Document
doc = MyDocument(
description="This is a description",
image=torch.zeros((3, 224, 224)),
)
# serialize it!
proto = doc.to_protobuf()
bytes_ = doc.to_bytes()
json = doc.json()
# deserialize it!
doc_2 = MyDocument.from_protobuf(proto)
doc_3 = MyDocument.from_bytes(bytes_)
doc_4 = MyDocument.parse_raw(json)
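Serialization also works at the collection level. Below is a minimal sketch, assuming DocList's to_bytes/from_bytes round trip (check the serialization docs for the full list of supported formats):
from docarray import DocList
# serialize a whole collection of documents at once
docs = DocList[MyDocument]([doc, doc_2])
bytes_docs = docs.to_bytes()
# and restore it, parametrizing DocList with the same schema
docs_restored = DocList[MyDocument].from_bytes(bytes_docs)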
Of course, serialization is not all you need. So check out how DocArray integrates with Jina and FastAPI.
After modeling and possibly distributing your data, you'll typically want to store it somewhere. That's where DocArray steps in!
Document Stores provide a seamless way to, as the name suggests, store your Documents. Be it locally or remotely, you can do it all through the same user interface:
The Document Store interface lets you push and pull Documents to and from multiple data sources, all with the same user interface.
For example, let's see how that works with on-disk storage:
from docarray import BaseDoc, DocList
class SimpleDoc(BaseDoc):
text: str
docs = DocList[SimpleDoc]([SimpleDoc(text=f'doc {i}') for i in range(8)])
docs.push('file://simple_docs')
docs_pull = DocList[SimpleDoc].pull('file://simple_docs')
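The same push/pull interface extends beyond local disk. As a minimal sketch, assuming a hypothetical S3 bucket name and AWS credentials already configured in your environment:
# push to remote storage instead of local disk ('my-bucket' is hypothetical)
docs.push('s3://my-bucket/simple_docs')
docs_pull = DocList[SimpleDoc].pull('s3://my-bucket/simple_docs')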
Document Indexes let you index your Documents in a vector database for efficient similarity-based retrieval.
This is useful for:
Currently, Document Indexes support Weaviate, Qdrant, Elasticsearch, Redis, and HNSWLib, with more to come!
The Document Index interface lets you index and retrieve Documents from multiple vector databases, all with the same user interface.
It supports ANN vector search, text search, filtering, and hybrid search.
from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np
from docarray.typing import ImageUrl, ImageTensor, NdArray
class ImageDoc(BaseDoc):
url: ImageUrl
tensor: ImageTensor
embedding: NdArray[128]
# create some data
dl = DocList[ImageDoc](
[
ImageDoc(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
embedding=np.random.random((128,)),
)
for _ in range(100)
]
)
# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index')
# index your data
index.index(dl)
# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
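You can also query with a raw vector instead of a document. A minimal sketch, under the assumption that this backend accepts bare tensors of the embedding field's dimensionality:
# query with a raw 128-dimensional vector matching the embedding field
query_vector = np.random.random((128,))
results, scores = index.find(query_vector, limit=10, search_field='embedding')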
Depending on your background and use case, there are different ways for you to understand DocArray.
If you are using a DocArray version lower than 0.30, you will be familiar with its dataclass API.
DocArray >=0.30 is that idea, taken seriously: every document is created through a dataclass-like interface, courtesy of Pydantic.
This brings several advantages, among them typed fields, validation, and serialization out of the box.
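To make the change concrete, here is a minimal sketch of both styles. The legacy snippet assumes the old <=0.21 dataclass API and is shown as comments, since it won't run on >=0.30:
# Legacy DocArray <=0.21 (dataclass API), for comparison only:
#
#   from docarray import dataclass, Document
#   from docarray.typing import Image, Text
#
#   @dataclass
#   class Page:
#       main_text: Text
#       image: Image

# DocArray >=0.30: the same idea, expressed as a Pydantic model
from docarray import BaseDoc
from docarray.documents import ImageDoc, TextDoc

class Page(BaseDoc):
    main_text: TextDoc
    image: ImageDoc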
You may also be familiar with our old Document Stores for vector DB integration. They are now called Document Indexes and offer a number of improvements (see here for the new API).
For now, Document Indexes support Weaviate, Qdrant, Elasticsearch, Redis, exact nearest neighbour search, and HNSWLib, with more to come.
If you come from Pydantic, you can see DocArray documents as juiced up Pydantic models, and DocArray as a collection of goodies around them.
More specifically, we set out to make Pydantic fit for the ML world - not by replacing it, but by building on top of it!
This means you get the following benefits: ML-focused types and smart behavior out of the box. For example, ImageUrl can .load() a URL to an image tensor, TextUrl can load and tokenize text documents, etc.
The most obvious advantage here is first-class support for ML-centric data, such as {Torch, TF, ...}Tensor, Embedding, etc.
This includes handy features such as validating the shape of a tensor:
from docarray import BaseDoc
from docarray.typing import TorchTensor
import torch
class MyDoc(BaseDoc):
tensor: TorchTensor[3, 224, 224]
doc = MyDoc(tensor=torch.zeros(3, 224, 224)) # works
doc = MyDoc(tensor=torch.zeros(224, 224, 3)) # works by reshaping
try:
doc = MyDoc(tensor=torch.zeros(224)) # fails validation
except Exception as e:
print(e)
# tensor
# Cannot reshape tensor of shape (224,) to shape (3, 224, 224) (type=value_error)
You can also use symbolic dimensions to enforce relationships between axes:
class Image(BaseDoc):
    tensor: TorchTensor[3, 'x', 'x']
Image(tensor=torch.zeros(3, 224, 224)) # works
try:
Image(
tensor=torch.zeros(3, 64, 128)
) # fails validation because second dimension does not match third
except Exception as e:
print(e)
try:
Image(
tensor=torch.zeros(4, 224, 224)
) # fails validation because of the first dimension
except Exception as e:
print(e)
# Tensor shape mismatch. Expected (3, 'x', 'x'), got (4, 224, 224) (type=value_error)
try:
Image(
tensor=torch.zeros(3, 64)
) # fails validation because it does not have enough dimensions
except Exception as e:
print(e)
# Tensor shape mismatch. Expected (3, 'x', 'x'), got (3, 64) (type=value_error)
If you come from PyTorch, you can see DocArray mainly as a way of organizing your data as it flows through your model.
It offers you several advantages:
DocArray can be used directly inside ML models to handle and represent multimodal data. This allows you to reason about your data using DocArray's abstractions deep inside of nn.Module, and provides a FastAPI-compatible schema that eases the transition between model training and model serving.
To see the effect of this, let's first observe a vanilla PyTorch implementation of a tri-modal ML model:
import torch
from torch import nn
def encoder(x):
return torch.rand(512)
class MyMultiModalModel(nn.Module):
def __init__(self):
super().__init__()
self.audio_encoder = encoder  # dummy; in practice, a dedicated nn.Module per modality
self.image_encoder = encoder
self.text_encoder = encoder
def forward(self, text_1, text_2, image_1, image_2, audio_1, audio_2):
embedding_text_1 = self.text_encoder(text_1)
embedding_text_2 = self.text_encoder(text_2)
embedding_image_1 = self.image_encoder(image_1)
embedding_image_2 = self.image_encoder(image_2)
embedding_audio_1 = self.audio_encoder(audio_1)
embedding_audio_2 = self.audio_encoder(audio_2)
return (
embedding_text_1,
embedding_text_2,
embedding_image_1,
embedding_image_2,
embedding_audio_1,
embedding_audio_2,
)
Not very easy on the eyes if you ask us. And even worse, if you need to add one more modality you have to touch every part of your code base, changing the forward() return type and making a whole lot of changes downstream from that.
So, now let's see what the same code looks like with DocArray:
from docarray import DocList, BaseDoc
from docarray.documents import ImageDoc, TextDoc, AudioDoc
from docarray.typing import TorchTensor
from torch import nn
import torch
def encoder(x):
return torch.rand(512)
class Podcast(BaseDoc):
text: TextDoc
image: ImageDoc
audio: AudioDoc
class PairPodcast(BaseDoc):
left: Podcast
right: Podcast
class MyPodcastModel(nn.Module):
def __init__(self):
super().__init__()
self.audio_encoder = encoder  # again, dummy encoders standing in for real nn.Modules
self.image_encoder = encoder
self.text_encoder = encoder
def forward_podcast(self, docs: DocList[Podcast]) -> DocList[Podcast]:
docs.audio.embedding = self.audio_encoder(docs.audio.tensor)
docs.text.embedding = self.text_encoder(docs.text.tensor)
docs.image.embedding = self.image_encoder(docs.image.tensor)
return docs
def forward(self, docs: DocList[PairPodcast]) -> DocList[PairPodcast]:
docs.left = self.forward_podcast(docs.left)
docs.right = self.forward_podcast(docs.right)
return docs
Looks much better, doesn't it? You instantly win in code readability and maintainability. And for the same price you can turn your PyTorch model into a FastAPI app and reuse your Document schema definition (see below). Everything is handled in a pythonic manner by relying on type hints.
Like the PyTorch approach, you can also use DocArray with TensorFlow to handle and represent multimodal data inside your ML model.
To use DocArray with TensorFlow, first install it as follows:
pip install tensorflow==2.12.0
pip install protobuf==3.19.0
Compared to using DocArray with PyTorch, there is one main difference when using it with TensorFlow: While DocArray's TorchTensor is a subclass of torch.Tensor, this is not the case for the TensorFlowTensor: due to some technical limitations of tf.Tensor, DocArray's TensorFlowTensor is not a subclass of tf.Tensor but rather stores a tf.Tensor in its .tensor attribute.

How does this affect you? Whenever you want to access the tensor data to, let's say, do operations with it or hand it to your ML model, instead of handing over your TensorFlowTensor instance, you need to access its .tensor attribute.
This would look like the following:
from typing import Optional
from docarray import DocList, BaseDoc
from docarray.typing import AudioTensorFlowTensor
import tensorflow as tf
class Podcast(BaseDoc):
audio_tensor: Optional[AudioTensorFlowTensor] = None
embedding: Optional[AudioTensorFlowTensor] = None
class MyPodcastModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.audio_encoder = AudioEncoder()  # AudioEncoder: a user-defined Keras encoder, not shown here
def call(self, inputs: DocList[Podcast]) -> DocList[Podcast]:
inputs.audio_tensor.embedding = self.audio_encoder(
inputs.audio_tensor.tensor
) # access audio_tensor's .tensor attribute
return inputs
Documents are Pydantic Models (with a twist), and as such they are fully compatible with FastAPI!
But why should you use them, and not the Pydantic models you already know and love? Good question!
And to seal the deal, let us show you how easily documents slot into your FastAPI app:
import numpy as np
from fastapi import FastAPI
from docarray.base_doc import DocArrayResponse
from docarray import BaseDoc
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor
class InputDoc(BaseDoc):
img: ImageDoc
text: str
class OutputDoc(BaseDoc):
embedding_clip: NdArray
embedding_bert: NdArray
app = FastAPI()
def model_img(img: ImageTensor) -> NdArray:
return np.zeros((100, 1))
def model_text(text: str) -> NdArray:
return np.zeros((100, 1))
@app.post("/embed/", response_model=OutputDoc, response_class=DocArrayResponse)
async def create_item(doc: InputDoc) -> OutputDoc:
doc = OutputDoc(
embedding_clip=model_img(doc.img.tensor), embedding_bert=model_text(doc.text)
)
return doc
from httpx import AsyncClient
input_doc = InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))
# run inside an async function:
async with AsyncClient(app=app, base_url="http://test") as ac:
    response = await ac.post("/embed/", data=input_doc.json())
Just like a vanilla Pydantic model!
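And because documents are Pydantic models under the hood, the usual Pydantic tooling applies. For instance (using the v1-style API that the rest of this README uses):
# inspect the JSON Schema that FastAPI exposes in its OpenAPI docs
print(InputDoc.schema_json(indent=2))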
Jina has adopted DocArray as its library for representing and serializing Documents.
Jina lets you serve and scale models and services built with DocArray, making full use of DocArray's serialization capabilities.
import numpy as np
from jina import Deployment, Executor, requests
from docarray import BaseDoc, DocList
from docarray.documents import ImageDoc
from docarray.typing import NdArray, ImageTensor
class InputDoc(BaseDoc):
img: ImageDoc
text: str
class OutputDoc(BaseDoc):
embedding_clip: NdArray
embedding_bert: NdArray
def model_img(img: ImageTensor) -> NdArray:
return np.zeros((100, 1))
def model_text(text: str) -> NdArray:
return np.zeros((100, 1))
class MyEmbeddingExecutor(Executor):
@requests(on='/embed')
def encode(self, docs: DocList[InputDoc], **kwargs) -> DocList[OutputDoc]:
ret = DocList[OutputDoc]()
for doc in docs:
output = OutputDoc(
embedding_clip=model_img(doc.img.tensor),
embedding_bert=model_text(doc.text),
)
ret.append(output)
return ret
with Deployment(
protocols=['grpc', 'http'], ports=[12345, 12346], uses=MyEmbeddingExecutor
) as dep:
resp = dep.post(
on='/embed',
inputs=DocList[InputDoc](
[InputDoc(text='', img=ImageDoc(tensor=np.random.random((3, 224, 224))))]
),
return_type=DocList[OutputDoc],
)
print(resp)
If you came across DocArray as a universal vector database client, you can best think of it as a new kind of ORM for vector databases. DocArray's job is to take your multimodal, nested, and domain-specific data, map it to a vector database, store it there, and thus make it searchable:
from docarray import DocList, BaseDoc
from docarray.index import HnswDocumentIndex
import numpy as np
from docarray.typing import ImageUrl, ImageTensor, NdArray
class ImageDoc(BaseDoc):
url: ImageUrl
tensor: ImageTensor
embedding: NdArray[128]
# create some data
dl = DocList[ImageDoc](
[
ImageDoc(
url="https://upload.wikimedia.org/wikipedia/commons/2/2f/Alpamayo.jpg",
tensor=np.zeros((3, 224, 224)),
embedding=np.random.random((128,)),
)
for _ in range(100)
]
)
# create a Document Index
index = HnswDocumentIndex[ImageDoc](work_dir='/tmp/test_index2')
# index your data
index.index(dl)
# find similar Documents
query = dl[0]
results, scores = index.find(query, limit=10, search_field='embedding')
Currently, DocArray supports the following vector databases: Weaviate, Qdrant, Elasticsearch, Redis, and HNSWLib, as well as in-memory exact nearest neighbour search. An integration of OpenSearch is currently in progress.
Of course, this is only one of the things that DocArray can do, so we encourage you to check out the rest of this README!
With DocArray, you can connect external data to LLMs through Langchain. DocArray gives you the freedom to establish flexible document schemas and choose from different backends for document storage. After creating your document index, you can connect it to your Langchain app using DocArrayRetriever.
Install Langchain via:
pip install langchain
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
# Define a document schema
class MovieDoc(BaseDoc):
title: str
description: str
year: int
embedding: NdArray[1536]
movies = [
{"title": "#1 title", "description": "#1 description", "year": 1999},
{"title": "#2 title", "description": "#2 description", "year": 2001},
]
# Embed `description` and create documents
docs = DocList[MovieDoc](
MovieDoc(embedding=embeddings.embed_query(movie["description"]), **movie)
for movie in movies
)
from docarray.index import (
InMemoryExactNNIndex,
HnswDocumentIndex,
WeaviateDocumentIndex,
QdrantDocumentIndex,
ElasticDocIndex,
RedisDocumentIndex,
)
# Select a suitable backend and initialize it with data
db = InMemoryExactNNIndex[MovieDoc](docs)
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.retrievers import DocArrayRetriever
# Create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="embedding",
content_field="description",
)
# Use the retriever in your chain
model = ChatOpenAI()
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
Alternatively, you can use the built-in vector stores. Langchain supports two DocArray-backed vector stores: DocArrayInMemorySearch and DocArrayHnswSearch. Both are user-friendly and best suited to small to medium-sized datasets.
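As a quick illustration, here is a minimal sketch using DocArrayInMemorySearch; the example texts and query below are made up:
from langchain.vectorstores import DocArrayInMemorySearch

# build an in-memory vector store directly from raw texts
texts = ["DocArray handles multimodal data", "Langchain connects LLMs to external data"]
store = DocArrayInMemorySearch.from_texts(texts, embeddings)
# retrieve the text most similar to the query
found = store.similarity_search("multimodal data structures", k=1)
print(found[0].page_content)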
DocArray is a trademark of LF AI Projects, LLC