Deepomatic Remote Procedure Call
This remote procedure call library was made to help you interact with our on-premises inference service.
You might also want to use our command line interface deepomatic-cli.
Installation
Online
pip3 install deepomatic-rpc
Offline
On a machine with internet access you will need to download the package and its dependencies with the command below:
mkdir deepomatic-rpc
pip3 download --platform any --only-binary=:all: -d ./deepomatic-rpc deepomatic-rpc
Then save the deepomatic-rpc directory on the storage device of your choice.
Now retrieve this directory on the offline machine and install the package:
pip3 install --no-index --find-links ./deepomatic-rpc ./deepomatic-rpc/deepomatic_rpc-*-py3-none-any.whl
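To confirm the offline installation succeeded, a quick sanity check is to import the package:
python3 -c "import deepomatic.rpc"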
Usage
Getting started
Instantiate client and queues
from deepomatic.rpc.client import Client
command_queue_name = 'my_command_queue'
recognition_version_id = 123
amqp_url = 'amqp://myuser:mypassword@localhost:5672/myvhost'
client = Client(amqp_url)
command_queue = client.new_queue(command_queue_name)
response_queue, consumer = client.new_consuming_queue()
Send recognition request
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc.response import wait
from deepomatic.rpc.helpers.v07_proto import create_images_input_mix, create_recognition_command_mix
command_mix = create_recognition_command_mix(recognition_version_id, max_predictions=100, show_discarded=False)
image_input = v07_ImageInput(source='https://static.wamiz.fr/images/animaux/chats/large/bengal.jpg'.encode())
input_mix = create_images_input_mix([image_input])
correlation_id = client.command(command_queue_name, response_queue.name, command_mix, input_mix)
response = consumer.get(correlation_id, timeout=5)
labels = response.get_labelled_output()
predicted = labels.predicted[0]
print("Predicted label {} with score {}".format(predicted.label_name, predicted.score))
Stream and cleanup
When you are done with a stream you should cleanup your consuming queues.
- If your program stops right after, the consumer gets cancelled and the queue will automatically be removed after 2 hours of inactivity (only if the queue is a unique temporary queue).
- If your program is a long running job, after 2 hours of inactivity the queue might be removed and the consumer cancelled by the broker, but the client might redeclare both in case of a broker error.
Thus calling client.remove_consuming_queue() removes the queue and makes sure the consumer is cancelled and not redeclared later:
client.remove_consuming_queue(response_queue, consumer)
You might also want to remove a queue without a consumer using:
client.remove_queue(queue)
Also, instead of using new_consuming_queue() with no queue_name parameter and remove_consuming_queue(), you might want to use the context manager version:
with client.tmp_consuming_queue() as (response_queue, consumer):
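    # For example (a minimal sketch reusing command_mix and input_mix from the
    # getting started section); the queue and consumer are cleaned up
    # automatically when the block exits:
    correlation_id = client.command(command_queue_name, response_queue.name, command_mix, input_mix)
    response = consumer.get(correlation_id, timeout=5)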
If you don't care about the response queue and consumer, we provide a high level class RPCStream.
By default it saves all correlation_ids so that you can call get_next_response() to get responses in the same order as you pushed the requests:
from deepomatic.rpc.helpers.proto import create_v07_images_command
serialized_buffer = create_v07_images_command([image_input], command_mix)
with client.new_stream(command_queue_name) as stream:
stream.send_binary(serialized_buffer)
response = stream.get_next_response(timeout=1)
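As a sketch, assuming image_inputs is a list of v07_ImageInput objects, you can push several requests and read the responses back in the order they were sent:
with client.new_stream(command_queue_name) as stream:
    # image_inputs is assumed to be a list of v07_ImageInput (see getting started)
    for image_input in image_inputs:
        stream.send_binary(create_v07_images_command([image_input], command_mix))
    for _ in image_inputs:
        response = stream.get_next_response(timeout=1)
        labels = response.get_labelled_output()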
You might also want to handle the response order yourself; in this case you can create the stream in the following way:
with client.new_stream(command_queue_name, keep_response_order=False) as stream:
correlation_id = stream.send_binary(serialized_buffer)
response = stream.consumer.get(correlation_id, timeout=1)
IMPORTANT: If you don't use the with statement, you will have to call stream.close() at the end to clean up the consumer and the response queue.
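For instance, a sketch of the manual equivalent, assuming new_stream() also returns the stream object when used outside a with statement:
stream = client.new_stream(command_queue_name)
try:
    correlation_id = stream.send_binary(serialized_buffer)
    response = stream.consumer.get(correlation_id, timeout=1)
finally:
    # clean up the consumer and the response queue
    stream.close()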
Advanced
Shortcuts
- You can avoid calling create_images_input_mix by directly sending the image_input list via the method client.v07_images_command, which internally calls create_images_input_mix:
correlation_id = client.v07_images_command(command_queue_name, response_queue.name, [image_input], command_mix)
- Create a workflow command mix. The recognition_version_id is deduced but the command queue name must match the recognition in the workflows.json. Note that it doesn't allow you to specify show_discarded or max_predictions:
from deepomatic.rpc.helpers.v07_proto import create_workflow_command_mix
command_mix = create_workflow_command_mix()
- Create an inference command mix; the response will be a raw tensor:
from deepomatic.rpc.helpers.v07_proto import create_inference_command_mix
output_tensors = ['prod']
command_mix = create_inference_command_mix(output_tensors)
- Wait for multiple correlation ids at once:
from deepomatic.rpc.response import wait_responses
responses, pending = wait_responses(consumer, correlation_ids, timeout=10)
print(responses)
print(pending)
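For example, a small sketch building the correlation_ids list from several image inputs (image_inputs is assumed to be a list of v07_ImageInput):
correlation_ids = [
    client.v07_images_command(command_queue_name, response_queue.name, [image_input], command_mix)
    for image_input in image_inputs
]
responses, pending = wait_responses(consumer, correlation_ids, timeout=10)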
Image input examples
- Create an image input with a bounding box:
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc import BBox
bbox = BBox(xmin=0.3, xmax=0.8, ymin=0.1, ymax=0.9)
image_input = v07_ImageInput(source='https://static.wamiz.fr/images/animaux/chats/large/bengal.jpg'.encode(),
bbox=bbox)
- Create an image input with a polygon selection:
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc import Point
polygon = [Point(x=0.1, y=0.1), Point(x=0.9, y=0.1), Point(x=0.5, y=0.9)]
image_input = v07_ImageInput(source='https://static.wamiz.fr/images/animaux/chats/large/bengal.jpg'.encode(),
polygon=polygon)
- Create an image input from a file on disk:
from deepomatic.rpc import v07_ImageInput
from deepomatic.rpc.helpers.proto import binary_source_from_img_file
binary_content = binary_source_from_img_file(filename)
image_input = v07_ImageInput(source=binary_content)
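Such an input can then be sent like any other image input, for example (a sketch reusing the client, queues and command_mix from the getting started section):
correlation_id = client.v07_images_command(command_queue_name, response_queue.name, [image_input], command_mix)
response = consumer.get(correlation_id, timeout=5)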