Model Serving made Efficient in the Cloud.
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backends and microservices. It bridges the gap between the machine learning model you just trained and an efficient online service API.
Mosec requires Python 3.7 or above. Install the latest PyPI package for Linux x86_64 or macOS x86_64/ARM64 with:
```shell
pip install -U mosec
# or install with conda
conda install conda-forge::mosec
```
To build from the source code, install Rust and run the following command:
```shell
make package
```
You will get a mosec wheel file in the `dist` folder.
We demonstrate how Mosec can help you easily host a pre-trained stable diffusion model as a service. You need to install diffusers and transformers as prerequisites:
```shell
pip install --upgrade diffusers[torch] transformers
```
Firstly, we import the libraries and set up a basic logger to better observe what happens.
```python
from io import BytesIO
from typing import List

import torch  # type: ignore
from diffusers import StableDiffusionPipeline  # type: ignore

from mosec import Server, Worker, get_logger
from mosec.mixin import MsgpackMixin

logger = get_logger()
```
Then, we build an API for clients to submit a text prompt and obtain an image based on the stable-diffusion-v1-5 model, in just three steps.
1. Define your service as a class which inherits `mosec.Worker`. Here we also inherit `MsgpackMixin` to employ the msgpack serialization format (a).
2. Inside the `__init__` method, initialize your model and put it onto the corresponding device. Optionally you can assign `self.example` with some data to warm up (b) the model. Note that the data should be compatible with your handler's input format, which we detail next.
3. Override the `forward` method to write your service handler (c), with the signature `forward(self, data: Any | List[Any]) -> Any | List[Any]`. Receiving/returning a single item or a list depends on whether dynamic batching (d) is configured.
```python
class StableDiffusion(MsgpackMixin, Worker):
    def __init__(self):
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "sd-legacy/stable-diffusion-v1-5", torch_dtype=torch.float16
        )
        self.pipe.enable_model_cpu_offload()
        self.example = ["useless example prompt"] * 4  # warmup (batch_size=4)

    def forward(self, data: List[str]) -> List[memoryview]:
        logger.debug("generate images for %s", data)
        res = self.pipe(data)
        logger.debug("NSFW: %s", res[1])
        images = []
        for img in res[0]:
            dummy_file = BytesIO()
            img.save(dummy_file, format="JPEG")
            images.append(dummy_file.getbuffer())
        return images
```
> [!NOTE]
>
> (a) In this example we return an image in binary format, which JSON does not support (unless encoded with base64, which makes the payload larger). Hence, msgpack suits our needs better. If we do not inherit `MsgpackMixin`, JSON will be used by default. In other words, the protocol of the service request/response can be msgpack, JSON, or any other format (check our mixins).
>
> (b) Warm-up usually helps to allocate GPU memory in advance. If the warm-up example is specified, the service will only be ready after the example has been forwarded through the handler. However, if no example is given, the first request's latency is expected to be longer. The `example` should be set as a single item or a list, depending on what `forward` expects to receive. Moreover, in the case where you want to warm up with multiple different examples, you may set `multi_examples` (demo here).
>
> (c) This example shows a single-stage service, where the `StableDiffusion` worker directly takes in the client's prompt request and responds with the image. Thus the `forward` can be considered as a complete service handler. However, we can also design a multi-stage service with workers doing different jobs (e.g., downloading images, model inference, post-processing) in a pipeline. In this case, the whole pipeline is considered as the service handler, with the first worker taking in the request and the last worker sending out the response. The data flow between workers is done by inter-process communication.
>
> (d) Since dynamic batching is enabled in this example, the `forward` method will receive a list of strings, e.g., `['a cute cat playing with a red ball', 'a man sitting in front of a computer', ...]`, aggregated from different clients for batch inference, improving the system throughput.
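Note (c) above mentions multi-stage pipelines. Below is a minimal sketch of how two stages could be chained by appending workers in order; the `Preprocess` and `Inference` workers here are hypothetical placeholders, not part of this example:

```python
# A minimal multi-stage sketch; `Preprocess` and `Inference` are hypothetical
# placeholder workers, not part of the stable diffusion example above.
from typing import List

from mosec import Server, Worker


class Preprocess(Worker):
    """CPU stage: normalize the client's request."""

    def forward(self, data: dict) -> str:
        return data["prompt"].strip()


class Inference(Worker):
    """GPU stage: batched model inference (stubbed out here)."""

    def forward(self, data: List[str]) -> List[str]:
        return [f"result for: {prompt}" for prompt in data]


if __name__ == "__main__":
    server = Server()
    # Workers appended in order form a pipeline: the first stage takes in the
    # request, the last one sends out the response; data flows between stages
    # via inter-process communication.
    server.append_worker(Preprocess, num=2)
    server.append_worker(Inference, num=1, max_batch_size=4, max_wait_time=10)
    server.run()
```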
Finally, we append the worker to the server to construct a single-stage workflow (multiple stages can be pipelined to further boost the throughput; see this example) and specify the number of processes we want it to run in parallel (`num=1`) and the maximum batch size (`max_batch_size=4`, the maximum number of requests dynamic batching will accumulate before the timeout; the timeout is defined with `max_wait_time=10` in milliseconds, meaning the longest time Mosec waits before sending a batch to the worker).
if __name__ == "__main__":
server = Server()
# 1) `num` specifies the number of processes that will be spawned to run in parallel.
# 2) By configuring the `max_batch_size` with the value > 1, the input data in your
# `forward` function will be a list (batch); otherwise, it's a single item.
server.append_worker(StableDiffusion, num=1, max_batch_size=4, max_wait_time=10)
server.run()
The above snippets are merged into our example file, which you can run directly at the project root level. First, let's have a look at the command line arguments (explanations here):
```shell
python examples/stable_diffusion/server.py --help
```
Then let's start the server with debug logs:
```shell
python examples/stable_diffusion/server.py --log-level debug --timeout 30000
```
Open http://127.0.0.1:8000/openapi/swagger/ in your browser to get the OpenAPI doc.
And in another terminal, test it:
```shell
python examples/stable_diffusion/client.py --prompt "a cute cat playing with a red ball" --output cat.jpg --port 8000
```
You will get an image named "cat.jpg" in the current directory.
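If you prefer to write your own client instead of using the example script, here is a minimal sketch; it assumes the default `/inference` route on port 8000 and the msgpack protocol this service uses, and the `httpx` and `msgpack` packages are arbitrary choices (any HTTP client and msgpack binding work):

```python
# Minimal client sketch (assumptions: default /inference route, msgpack
# protocol matching the server's MsgpackMixin, `httpx` and `msgpack` installed).
import httpx
import msgpack

prompt = "a cute cat playing with a red ball"
resp = httpx.post(
    "http://127.0.0.1:8000/inference",
    content=msgpack.packb(prompt),
    timeout=60.0,
)
resp.raise_for_status()
# The response body is a msgpack-encoded JPEG (the memoryview returned by `forward`).
with open("cat.jpg", "wb") as f:
    f.write(msgpack.unpackb(resp.content))
```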
You can check the metrics:
```shell
curl http://127.0.0.1:8000/metrics
```
That's it! You have just hosted your stable-diffusion model as a service! 😉
More ready-to-use examples can be found in the Example section.

The dynamic batching parameters `max_batch_size` and `max_wait_time` (in milliseconds) are configured when you call `append_worker`:

- Make sure that batch inference with the chosen `max_batch_size` value won't cause out-of-memory on the GPU.
- Normally, `max_wait_time` should be less than the batch inference time.
- A batch is dispatched either when the number of accumulated requests reaches `max_batch_size` or when `max_wait_time` has elapsed. The service benefits from this feature when the traffic is high.

If you are looking for a GPU base image with `mosec` installed, check the official image `mosecorg/mosec`. For complex use cases, check out envd.

Remember to collect the metrics:

- `mosec_service_batch_size_bucket` shows the batch size distribution.
- `mosec_service_batch_duration_second_bucket` shows the duration of dynamic batching for each connection in each stage (starting from receiving the first task).
- `mosec_service_process_duration_second_bucket` shows the duration of processing for each connection in each stage (including the IPC time but excluding the `mosec_service_batch_duration_second_bucket`).
- `mosec_service_remaining_task` shows the number of currently processing tasks.
- `mosec_service_throughput` shows the service throughput.

Stop the service with `SIGINT` (`CTRL+C`) or `SIGTERM` (`kill {PID}`), since it has graceful shutdown logic.

To tune the performance:

- Find the best `max_batch_size` and `max_wait_time` for your inference service. The metrics show histograms of the real batch size and batch duration; these are the key information for adjusting the two parameters (see the sketch after this list).
- Data passing between stages is serialized/deserialized by the `serialize_ipc`/`deserialize_ipc` methods, so extremely large data might make the whole pipeline slow. The serialized data is passed to the next stage through Rust by default; you could enable shared memory to potentially reduce the latency (ref RedisShmIPCMixin).
- Choose appropriate `serialize`/`deserialize` methods, which are used to decode the user request and encode the response. By default, both use JSON. However, images and embeddings are not well supported by JSON; you can choose msgpack, which is faster and binary-compatible (ref Stable Diffusion).
- `mosec` automatically adapts to the user's protocol (e.g., HTTP/2) since v0.8.8.
Here are some of the companies and individual users that are using Mosec:

If you find this software useful for your research, please consider citing:
```bibtex
@software{yang2021mosec,
  title = {{MOSEC: Model Serving made Efficient in the Cloud}},
  author = {Yang, Keming and Liu, Zichen and Cheng, Philip},
  url = {https://github.com/mosecorg/mosec},
  year = {2021}
}
```
We welcome any kind of contribution. Please give us feedback by raising issues or discussing on Discord. You could also contribute directly with your code by opening a pull request!
To start developing, you can use envd to create an isolated and clean Python & Rust environment. Check the envd docs or build.envd for more information.