pipeline-ai

Pipelines for machine learning workloads.

Created by mystic.ai

Try premade models built with this library for free: https://www.mystic.ai/explore

Table of Contents

  • About
  • Installation and quickstart
  • Models
  • Example and tutorials
  • Development
  • License

About

Pipeline is a Python library that provides a simple way to construct computational flows for AI/ML models. The library is suitable for both development and production environments, and supports inference as well as training/fine-tuning. It is also a direct interface to Mystic, which provides a compute engine for running pipelines at scale and on enterprise GPUs. The SDK can also be used with Pipeline Core on a privately hosted cluster.

The syntax used for defining AI/ML pipelines is similar to sessions in TensorFlow v1 and to Flows in Prefect.

Installation and quickstart

To install pipeline, run:

pip install pipeline-ai

To create a new pipeline, navigate to the directory you want to create it in and run:

pipeline container init -n quickstart

This will create two files in the directory:

  • pipeline.yaml - The configuration file for the container that runs the pipeline.
  • new_pipeline.py - The Python file to populate with your pipeline (a minimal sketch of its shape follows below).
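
For orientation, a populated new_pipeline.py typically takes roughly the shape below. This is a minimal sketch rather than the generated scaffold: the greet function, its string input, and the use of @pipe on a standalone function (instead of on a method of an @entity class, as in the fuller example later in this README) are illustrative assumptions.

from pipeline import Pipeline, Variable, pipe


# Minimal illustrative pipeline; this stage is a placeholder, not what
# `pipeline container init` generates.
@pipe
def greet(name: str) -> str:
    return f"Hello, {name}!"


with Pipeline() as builder:
    name = Variable(str)        # declare the pipeline's input
    output = greet(name)        # wire the stage into the graph
    builder.output(output)      # expose the result as the pipeline output

my_new_pipeline = builder.get_pipeline()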

Models

Below are some popular models that have been premade by the community on Mystic. You can find more models in the explore section of Mystic; the source code for these models is also referenced in the table.

Model | Category | Description | Source
--- | --- | --- | ---
meta/llama2-7B | LLM | A 7B parameter LLM created by Meta (vllm accelerated) | source
meta/llama2-13B | LLM | A 13B parameter LLM created by Meta (vllm accelerated) | source
meta/llama2-70B | LLM | A 70B parameter LLM created by Meta (vllm accelerated) | source
runwayml/stable-diffusion-1.5 | Vision | Text -> Image | source
stabilityai/stable-diffusion-xl-refiner-1.0 | Vision | SDXL Text -> Image | source
matthew/e5_large-v2 | LLM | Text embedding | source
matthew/musicgen_large | Audio | Music generation | source
matthew/blip | Vision | Image captioning | source
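
These premade pipelines run remotely on Mystic's compute engine. As a rough sketch of what a remote run can look like (the run_pipeline helper, the pipeline pointer string, and the input shape are assumptions that may differ across SDK versions; see the Runs tutorial for the exact API):

from pipeline.cloud.pipelines import run_pipeline  # ASSUMPTION: helper name and location

# Hypothetical remote run of a premade LLM; the pointer and inputs are placeholders.
result = run_pipeline(
    "meta/llama2-7B",
    "What is the capital of France?",
)
print(result)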

Example and tutorials

Tutorial | Description
--- | ---
Entity objects | Use entity objects to persist values and store state
Cold start optimisations | Premade functions to do heavy tasks separately
Input/output types | Defining what goes in and out of your pipes
Files | Inputting or outputting files from your runs
Pipeline building | Building pipelines - how it works
Runs | Running a pipeline remotely - how it works

Below is some sample Python that demonstrates various features of the Pipeline SDK by building a simple pipeline that can be run locally or on Mystic.

from pathlib import Path
from typing import List

import torch
from diffusers import StableDiffusionPipeline

from pipeline import Pipeline, Variable, pipe, entity
from pipeline.cloud import compute_requirements
from pipeline.objects import File
from pipeline.objects.graph import InputField, InputSchema


class ModelKwargs(InputSchema): # TUTORIAL: Input/output types
    height: int | None = InputField(default=512, ge=64, le=1024)
    width: int | None = InputField(default=512, ge=64, le=1024)
    num_inference_steps: int | None = InputField(default=50)
    num_images_per_prompt: int | None = InputField(default=1, ge=1, le=4)
    guidance_scale: float | None = InputField(default=7.5)


@entity # TUTORIAL: Entity objects
class StableDiffusionModel:
    @pipe(on_startup=True, run_once=True) # TUTORIAL: Cold start optimisations
    def load(self):
        model_id = "runwayml/stable-diffusion-v1-5"
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.pipe = StableDiffusionPipeline.from_pretrained(
            model_id,
        )
        self.pipe = self.pipe.to(device)

    @pipe
    def predict(self, prompt: str, kwargs: ModelKwargs) -> List[File]: # TUTORIAL: Input/output types
        defaults = kwargs.to_dict()
        images = self.pipe(prompt, **defaults).images

        output_images = []
        for i, image in enumerate(images):
            path = Path(f"/tmp/sd/image-{i}.jpg")
            path.parent.mkdir(parents=True, exist_ok=True)
            image.save(str(path))
            output_images.append(File(path=path, allow_out_of_context_creation=True)) # TUTORIAL: Files

        return output_images


with Pipeline() as builder: # TUTORIAL: Pipeline building
    prompt = Variable(str)
    kwargs = Variable(ModelKwargs)
    model = StableDiffusionModel()
    model.load()
    output = model.predict(prompt, kwargs)
    builder.output(output)

my_pl = builder.get_pipeline()
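
Once the graph has been built it can be exercised locally before being containerised and deployed. The snippet below is a sketch under two assumptions that may vary by SDK version: that the object returned by get_pipeline() exposes a run() method taking inputs in the order they were declared, and that ModelKwargs() can be constructed with its defaults. See the Pipeline building and Runs tutorials for the supported entry points.

# Hypothetical local run; run() and its argument order are assumptions.
if __name__ == "__main__":
    results = my_pl.run("a photo of an astronaut riding a horse", ModelKwargs())
    print(results)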

Development

This project is managed with Poetry, so first set up Poetry on your machine.

Once that is done, please run

./setup.sh

This sets up dependencies, pre-commit hooks, and pre-push hooks; with that done you should be good to go.

You can run the pre-commit hooks manually with

pre-commit run --all-files

To run the tests manually, run

pytest

For developing v4, i.e. containerized pipelines, you need to override the pipeline-ai Python package installed in the container. This can be done by bind-mounting your local pipeline source directory into the container, e.g. using raw Docker, as sketched below.
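
As a purely illustrative example (the image name and the site-packages path are placeholders that depend on your container and Python version), such a bind mount might look like:

docker run -it -v "$(pwd)/pipeline:/usr/local/lib/python3.10/site-packages/pipeline" <your-pipeline-image>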

License

Pipeline is licensed under Apache Software License Version 2.0.
