Run AI on Any Infra — Unified, Faster, Cheaper
:fire: News :fire:
- [Feb 2025] Prepare and serve Retrieval Augmented Generation (RAG) with DeepSeek-R1: blog post, example
- [Feb 2025] Run and serve DeepSeek-R1 671B using SkyPilot and SGLang with high throughput: example
- [Feb 2025] Prepare and serve large-scale image search with vector databases: blog post, example
- [Jan 2025] Launch and serve distilled models from DeepSeek-R1 and Janus on Kubernetes or any cloud: R1 example and Janus example
- [Oct 2024] :tada: SkyPilot crossed 1M+ downloads :tada:: Thank you to our community! Twitter/X
- [Sep 2024] Point, launch and serve Llama 3.2 on Kubernetes or any cloud: example
- [Sep 2024] Run and deploy Pixtral, the first open-source multimodal model from Mistral AI.
- [Jun 2024] Reproduce GPT with llm.c on any cloud: guide
- [Apr 2024] Serve Qwen-110B on your infra: example
- [Apr 2024] Host Ollama on the cloud to deploy LLMs on CPUs and GPUs: example
- LLM Finetuning Cookbooks: Finetuning Llama 2 / Llama 3.1 in your own cloud environment, privately: Llama 2 example and blog; Llama 3.1 example and blog
SkyPilot is a framework for running AI and batch workloads on any infra, offering unified execution, high cost savings, and high GPU availability.
SkyPilot abstracts away infra burdens:
- Launch clusters, jobs, and serving on any infra
- Easy job management: queue, run, and auto-recover many jobs
SkyPilot supports multiple clusters, clouds, and hardware (the Sky):
- Bring your reserved GPUs, Kubernetes clusters, or 12+ clouds
- Flexible provisioning of GPUs, TPUs, CPUs, with auto-retry
SkyPilot cuts your cloud costs & maximizes GPU availability:
- Autostop: automatic cleanup of idle resources
- Managed Spot: 3-6x cost savings using spot instances, with preemption auto-recovery
- Optimizer: 2x cost savings by auto-picking the cheapest & most available infra
SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.
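As a rough sketch of how these capabilities are used day to day (the cluster name `mycluster` and the file `task.yaml` below are placeholders, not part of any specific example above):

```bash
# Launch a task on the cheapest available infra and keep the cluster around.
sky launch -c mycluster task.yaml

# Auto-stop the cluster after 10 idle minutes to avoid paying for idle GPUs.
sky autostop -i 10 mycluster

# Run the same task as a managed job on spot instances; SkyPilot
# auto-recovers the job if the spot instances are preempted.
sky jobs launch --use-spot task.yaml
```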
Install with pip:

```bash
pip install -U "skypilot[kubernetes,aws,gcp,azure,oci,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,nebius]"
```
To get the latest features and fixes, use the nightly build or install from source:

```bash
pip install "skypilot-nightly[kubernetes,aws,gcp,azure,oci,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,nebius]"
```
Current supported infra: Kubernetes; AWS, GCP, Azure, OCI, Lambda Cloud, Fluidstack, RunPod, Cudo, Digital Ocean, Paperspace, Cloudflare, Samsung, IBM, Vast.ai, VMware vSphere, Nebius.
Getting started
You can find our documentation here.
SkyPilot in 1 minute
A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.
Once written in this unified interface (YAML or Python API), the task can be launched on any available cloud. This avoids vendor lock-in and makes it easy to move jobs to a different provider.
Paste the following into a file `my_task.yaml`:
```yaml
resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPUs

num_nodes: 1  # Number of VMs to launch

# Working directory (optional), synced to the cluster.
workdir: ~/torch_examples

# Commands run once to set up the environment.
setup: |
  pip install "torch<2.2" torchvision --index-url https://download.pytorch.org/whl/cu121

# Commands run as the job.
run: |
  cd mnist
  python main.py --epochs 1
```
Prepare the workdir by cloning:

```bash
git clone https://github.com/pytorch/examples.git ~/torch_examples
```
Launch with `sky launch` (note: access to GPU instances is needed for this example):

```bash
sky launch my_task.yaml
```
SkyPilot then performs the heavy lifting for you, including:
- Find the lowest-priced VM instance type across different clouds
- Provision the VM, with auto-failover if the cloud returns capacity errors
- Sync the local `workdir` to the VM
- Run the task's `setup` commands to prepare the VM for running the task
- Run the task's `run` commands
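Once the task finishes, a few follow-up commands are handy (a minimal sketch; `mycluster` stands for the cluster name shown by `sky launch` or passed via `-c`):

```bash
# See provisioned clusters and their status.
sky status

# Run another command on the same cluster without re-provisioning.
sky exec mycluster nvidia-smi

# Tear down the cluster when you are done.
sky down mycluster
```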
See Quickstart to get started with SkyPilot.
Runnable examples
See SkyPilot examples that cover: development, training, serving, LLM models, AI apps, and common frameworks.
Latest featured examples:
| Task | Examples |
|------|----------|
| Training | PyTorch, DeepSpeed, Finetune Llama 3, NeMo, Ray, Unsloth, Jax/TPU |
| Serving | vLLM, SGLang, Ollama |
| Models | DeepSeek-R1, Llama 3, CodeLlama, Qwen, Mixtral |
| AI apps | RAG, vector databases (ChromaDB, CLIP) |
| Common frameworks | Airflow, Jupyter |
Source files and more examples can be found in `llm/` and `examples/`.
More information
To learn more, see SkyPilot Overview, SkyPilot docs, and SkyPilot blog.
Case studies and integrations: Community Spotlights
Follow updates:
Read the research:
SkyPilot was initially started at the Sky Computing Lab at UC Berkeley and has since gained many industry contributors. To read about the project's origin and vision, see Concept: Sky Computing.
Questions and feedback
We are excited to hear your feedback:
For general discussions, join us on the SkyPilot Slack.
Contributing
We welcome all contributions to the project! See CONTRIBUTING for how to get involved.