Ximilar API Python Client
This Python 3.9+ client library is a lightweight wrapper for ximilar.com.
Installation
From PyPI (https://pypi.org/project/ximilar-client/):
# we recommend installing ximilar-client into a new virtualenv
pip install ximilar-client
Manual installation with the latest changes:
1. Clone the repo
git clone https://gitlab.com/ximilar-public/ximilar-client.git
2. Install it with pip into your virtualenv
pip install -e ximilar-client
This will also install the urllib3, requests, tqdm and pytest libraries. It will not install OpenCV, which is required if you want to upload local images to the Ximilar system. For more information about installing OpenCV on different systems, we have a small page in our docs.
You will need to install one of opencv-python or opencv-contrib-python (or a headless variant) manually.
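For example, either of the following is enough; the headless build is a common choice on servers without a display:
# standard build with GUI support
pip install opencv-python
# headless build for servers
pip install opencv-python-headless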
Usage
First you need to register via app.ximilar.com and obtain your API TOKEN for communication with the Ximilar REST endpoints. You can find the token in the Ximilar App on your profile page.
After you obtain the token, the usage is quite straightforward. First, import this package and create a specific REST client (recognition/vize, tagging, colors, search, ...). In the following example we create a client for the Ximilar Recognition Service (vize.ai). For all other Ximilar services, such as Tagging or Custom Object Detection, you will need to contact tech@ximilar.com first, so they can give you access to the service:
from ximilar.client import RecognitionClient, DetectionClient
from ximilar.client import DominantColorProductClient, DominantColorGenericClient
from ximilar.client import FashionTaggingClient, GenericTaggingClient
app_client = RecognitionClient(token="__API_TOKEN__")
detect_client = DetectionClient(token="__API_TOKEN__")
...
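If you prefer not to hard-code the token, a minimal sketch is to read it from an environment variable; the variable name XIMILAR_API_TOKEN below is our own example, not something the library requires:
import os
from ximilar.client import RecognitionClient
# read the API token from the environment instead of hard-coding it
app_client = RecognitionClient(token=os.environ["XIMILAR_API_TOKEN"])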
Workspaces
With the new version of the Ximilar App you can also work with workspaces. Workspaces are entities where all your tasks, labels and images live. Each user has a workspace named Default (it is used if you do not specify a workspace when working with an Image, Label or Task). However, you can specify the id of a workspace in the constructor.
client = RecognitionClient(token="__API_TOKEN__", workspace='__UUID_OF_YOUR_WORKSPACE__')
client = DetectionClient(token="__API_TOKEN__", workspace='__UUID_OF_YOUR_WORKSPACE__')
Ximilar Recognition
This client allows you to work with the Ximilar Recognition Service. With this client you are able to create classification or tagging tasks based on the latest trends in machine learning and neural networks.
After creating the client object you can, for example, load your existing task and call train:
# load an existing task and start the training
task, status = client.get_task(task_id='__ID_TASK_')
task.train()
# list all tasks of the workspace
tasks, status = client.get_all_tasks()
# create a new task
task, status = client.create_task('__TASK_NAME__')
Task
Currently there are two types of tasks to create. The user can select 'multi_class' (default) or 'multi_label'. See the Ximilar docs for more info.
# create a classification (multi_class) task
classification_task, status = client.create_task('__TASK_NAME__')
# create a tagging (multi_label) task
tagging_task, status = client.create_task('__TASK_NAME__', type='multi_label')
# remove a task, either through the client or through the task object
client.remove_task(task.id)
task.remove()
Classify
Suppose you want to use the task to predict results on your images. Please always try to send images bigger than 200px and smaller than 600px for good quality and speed:
result = task.classify([{'_url': '__URL_PATH_TO_IMG__'}, {'_file': '__LOCAL_FILE_PATH__'}, {'_base64': '__BASE64_DATA__'}])
best_label = result['records'][0]['best_label']
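If you want more than the best label, you can walk through the returned records. The per-label list used below ('labels' entries with 'name' and 'prob') is an assumption based on the Ximilar docs, so check it against the actual response of your task:
record = result['records'][0]
# assumed fields: each entry carries the label name and its probability
for label in record.get('labels', []):
    print(label.get('name'), label.get('prob'))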
Labels
Labels are connected to the task. Depending on which task you are working with (Tagging/multi_label or Categorization/multi_class) you can create Tag or Category labels. Working with labels is pretty simple:
# get an existing label
existing_label, status = client.get_label('__ID_LABEL__')
# create a new label and connect it to the task
label, status = client.create_label(name='__NEW_LABEL_NAME__')
task.add_label(label.id)
# create a tag label (for multi_label/tagging tasks)
label, status = client.create_label(name='__NEW_LABEL_NAME__', label_type='tag')
# list all labels of the task
labels, status = task.get_labels()
for label in labels:
    print(label.id, label.name)
# find a label of the task by its name
label, status = task.get_label_by_name(name='__LABEL_NAME__')
# detach a label from the task or remove it completely
task.detach_label(label.id)
client.remove_label(label.id)
# detach an image from the label
label.detach_image(image.id)
# search labels by a substring of the name
labels, status = client.get_labels_by_substring('__LABEL_NAME__')
Working with training images
The Image is the main entity in the Ximilar system. Every image can have multiple labels (Recognition service) or multiple objects (Detection service).
images, next_page, status = label.get_training_images()
while images:
    for image in images:
        print(str(image.id))
    if not next_page:
        break
    images, next_page, status = label.get_training_images(next_page)
image, status = client.get_image(image_id=image.id)
image.add_label(label.id)
image.detach_label(label.id)
client.remove_image(image.id)
Let's say you want to upload a training image and add several labels to this image:
images, status = client.upload_images([{'_url': '__URL_PATH_TO_IMAGE__', 'labels': [label.id for label in labels], "meta_data": {"field": "key"}},
{'_file': '__LOCAL_FILE_PATH__', 'labels': [label.id for label in labels]},
{'_base64': '__BASE64_DATA__', 'labels': [label.id for label in labels]}])
images[0].add_label("__SOME_LABEL_ID__")
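If you want to build the base64 record yourself, a small sketch using the standard library (the file path is a placeholder) looks like this:
import base64
# read the local image and encode it as a base64 string for the '_base64' field
with open('__LOCAL_FILE_PATH__', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('utf-8')
images, status = client.upload_images([{'_base64': encoded, 'labels': [label.id for label in labels]}])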
Upload an image without resizing it (for example, Custom Object Detection requires high-resolution images):
images, status = client.upload_images([{'_url': '__URL_PATH_TO_IMAGE__', "noresize": True}])
Every image can have some meta data stored:
image.add_meta_data({"__KEY_1__": "value", "__KEY_2__": {"THIS CAN BE": "COMPLEX"}})
image.clear_meta_data()
Every image can be marked with a test flag (used for evaluation on an independent test dataset only):
image.set_test(True)
Every image can be marked as real (default) or product. Product images should be images where one dominant object is placed on a nice solid background. We can do more augmentations on these images.
image.set_real(False)
Ximilar Flows
The client is able to get the JSON definition of a flow or to process images/records with the flow.
from ximilar.client import FlowsClient
client = FlowsClient("__API_TOKEN__")
flow, _ = client.get_flow("__FLOW_ID__")
# records are standard JSON records, for example a list of {"_url": ...} items
records = [{"_url": "__URL_PATH_TO_IMAGE__"}]
client.process_flow(flow.id, records)
flow.process(records)
Ximilar Object Detection
Ximilar Object Detection is a service which will help you find the exact location of objects (a Bounding Box/Object with four coordinates: xmin, ymin, xmax, ymax).
In a similar way to Ximilar Recognition, we also have Tasks, Labels and Images here. However, one more entity called Object is present in Ximilar Object Detection.
First you need to create/get a Detection Task:
client = DetectionClient("__API_TOKEN__")
detection_task, status = client.create_task("__DETECTION_TASK_NAME__")
detection_task, status = client.get_task(task.id)
Second, you need to create a Detection Label and connect it to the task:
detection_label, status = client.create_label("__DETECTION_LABEL_NAME__")
detection_label, status = client.get_label("__DETECTION_LABEL_ID__")
detection_task.add_label(detection_label.id)
Lastly, you need to create Objects/Bounding Box annotations of some type (Label) on the images:
image, status = client.get_image("__IMAGE_ID__")
d_object, status = client.create_object("__DETECTION_LABEL_ID__", "__IMAGE_ID__", [xmin, ymin, xmax, ymax])
d_object, status = client.get_object(d_object.id)
d_objects, status = client.get_objects_of_image("__IMAGE_ID__")
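If one image contains several boxes of the same label, you can create them in a loop with the same create_object call; the coordinates below are made-up placeholder values in [xmin, ymin, xmax, ymax] order:
# hypothetical bounding boxes for illustration only
boxes = [[10, 20, 150, 200], [300, 40, 480, 260]]
for box in boxes:
    d_object, status = client.create_object("__DETECTION_LABEL_ID__", "__IMAGE_ID__", box)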
Then you can train your task:
detection_task.train()
Removing entities is the same as in the recognition client:
client.remove_task("__DETECTION_TASK_ID__")
client.remove_label("__DETECTION_LABEL_ID__")
client.remove_object("__DETECTION_OBJECT_ID__")
client.remove_image("__IMAGE_ID__")
task.remove()
label.remove()
object1, status = client.get_object("__DETECTION_OBJECT_ID__")
object1.remove()
image.remove()
Getting the detection result:
result = detection_task.detect([{"_url": "__URL_PATH_TO_IMAGE__"}])
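The detected objects come back inside the returned records. The field names used below ('_objects' entries with 'name', 'prob' and 'bound_box') are an assumption based on the Ximilar docs, so verify them against your actual response:
record = result['records'][0]
# assumed structure of one detected object entry
for obj in record.get('_objects', []):
    print(obj.get('name'), obj.get('prob'), obj.get('bound_box'))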
Extracting an object from an image:
image, status = client.get_image("59f7240d-ca86-436b-b0cd-30f4b94705df")
object1, status = client.get_object("__DETECTION_OBJECT_ID__")
extracted_image_record = image.extract_object_data(object1.data)
Speeding it up with Parallel Processing
If you are uploading/classifying thousands of images and really need to speed it up, then you can use the method parallel_records_processing:
result = client.parallel_records_processing([{"_url": image} for image in images], method=task.classify, output=True, max_workers=3)
result = client.parallel_records_processing([{"_url": image} for image in images], method=task.detect, output=True, max_workers=3)
result = client.parallel_records_processing([{"_url": image, "labels": ["__LABEL_ID_1__"]} for image in images], method=client.upload_images, output=True)
This method works only for getting results from classification, tagging, detection, color extraction or uploading images (all methods which use JSON records as input).
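For example, a sketch for uploading all JPEG files from a local folder in parallel (the folder path and label id are placeholders):
import glob
# build one JSON record per local file and upload them in parallel
records = [{"_file": path, "labels": ["__LABEL_ID__"]} for path in glob.glob("__FOLDER_PATH__/*.jpg")]
result = client.parallel_records_processing(records, method=client.upload_images, output=True, max_workers=3)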
Ximilar Visual Search
A service for visual fashion search. For more information see docs.ximilar.com.
from ximilar.client.visual import SimilarityFashionClient
client = SimilarityFashionClient(token='__API_TOKEN__', collection_id='__COLLECTION_ID__')
client.insert([{"_id": "__IMAGE_ID__", "product_id": "__PRODUCT_ID__", "_url": "__URL_PATH_TO_IMAGE__"}])
result = client.detect([{"_url": "__URL_PATH_TO_IMAGE__"}])
result = client.search([{"_url": "__URL_PATH_TO_IMAGE__"}])
Ximilar Dominant Colors
You can select the service for extracting dominant colors based on the type of your image. If the image is from the Product/Fashion domain, which means that the product is typically on some solid background, then use DominantColorProductClient.
from ximilar.client import DominantColorProductClient, DominantColorGenericClient
product_client = DominantColorProductClient(token="__API_TOKEN__")
generic_client = DominantColorGenericClient(token="__API_TOKEN__")
result = product_client.dominantcolor([{"_url": "__URL_PATH_TO_IMAGE__"}])
print(result['records'][0]['_dominant_colors'])
Ximilar Generic and Fashion Tagging
Tagging contains two clients, in a similar way as Dominant Colors does.
from ximilar.client import FashionTaggingClient, GenericTaggingClient
fashion_client = FashionTaggingClient(token="__API_TOKEN__")
generic_client = GenericTaggingClient(token="__API_TOKEN__")
result = generic_client.tags([{"_url": "__URL_PATH_TO_IMAGE__"}])
print(result['records'][0]['_tags'])
result = fashion_client.tags([{"_url": "__URL_PATH_TO_IMAGE__"}])
print(result['records'][0]['_tags'])
result = fashion_client.meta_tags([{"_url": "__URL_PATH_TO_IMAGE__"}])
print(result['records'][0]['_tags_meta_simple'])
Ximilar Photo and Product similarity
These two services provide visual search (similarity search) for generic (stock) photos or products (e-commerce, fashion, ...). When initializing the client you need to specify both the token and your collection_id that we created for you.
from ximilar.client.search import SimilarityPhotosClient, SimilarityProductsClient
client = SimilarityPhotosClient(token='__API_TOKEN__', collection_id='__COLLECTION_ID__')
client = SimilarityProductsClient(token='__API_TOKEN__', collection_id='__COLLECTION_ID__')
result = client.random(count=7, fields_to_return=['_id', '_url'])
result = client.search({'_id': '__ITEM_ID__'}, k=10)
result = client.search({'_url': '__URL_PATH_TO_IMAGE__'}, k=5)
result = client.search({'_id': '__ITEM_ID__'}, fields_to_return=['_id', '_url'],
filter={
'meta-category-x': { '$in': ['__SOME_VALUE_1__', '__SOME_VALUE_2__']},
'some-field': '__SOME_VALUE__'
})
All CRUD operations:
result = client.get_records([{'_id': '__ITEM_ID__'}, {'_id': '__ITEM_ID__'}])
result = client.insert([{'_id': '__ITEM_ID__', '_url': '__URL_PATH_TO_IMAGE__',
'meta-category-x': '__CATEGORY_OF_ITEM__',
'meta-info-y': '__ANOTHER_META_INFO__'}])
result = client.remove([{'_id': '__ITEM_ID__'}])
result = client.update([{'_id': '__ITEM_ID__', 'some-additional-field': '__VALUE__'}])
Custom Similarity
This service lets you train your custom image similarity model.
Creating entities is similar to the recognition or detection service.
from ximilar.client.similarity import CustomSimilarityClient
client = CustomSimilarityClient("__API__TOKEN__")
tasks, _ = client.get_all_tasks()
task, _ = client.create_task("__NAME__", "__DESCRIPTION__")
type1, _ = client.create_type("__NAME__", "__DESCRIPTION__")
group, _ = client.create_group("__NAME__", "__DESCRIPTION__", type1.id)
Add/Remove types to/from task:
task.add_type(type1.id)
task.remove_type(type1.id)
Add/Remove images to/from group:
group.add_images(["__IMAGE_ID_1__"])
group.remove_images(["__IMAGE_ID_1__"])
group.refresh()
Add/Remove groups to/from group:
group.add_groups(["__GROUP_ID_1__"])
group.remove_groups(["__GROUP_ID_1__"])
group.refresh()
Set/unset a group as test (the test flag is for the evaluation dataset):
group.set_test(True)
group.refresh()
Searching groups by name:
client.get_all_groups_by_name("__NAME__")
Tools
In our tools folder you can find some useful scripts:
uploader.py for uploading all images from a specific folder
data_saver.py for saving an entire recognition and detection workspace including images
data_wiper.py for removing an entire workspace and all your data in the workspace
detection_cutter.py for cutting objects out of images