✨ Finetune for Free
All notebooks are beginner-friendly! Add your dataset, click "Run All", and you'll get a 2x-faster finetuned model that can be exported to GGUF, Ollama, or vLLM, or uploaded to Hugging Face.
⭐ Key Features
- All kernels written in OpenAI's Triton language. Manual backprop engine.
- 0% loss in accuracy - no approximation methods - all exact.
- No change of hardware needed. Supports NVIDIA GPUs from 2018 onward, minimum CUDA Capability 7.0 (V100, T4, Titan V, RTX 20/30/40 series, A100, H100, L40, etc.). Check your GPU! GTX 1070 and 1080 work, but are slow.
- Works on Linux and Windows via WSL.
- Supports 4-bit and 16-bit QLoRA / LoRA finetuning via bitsandbytes.
- The open-source version trains 5x faster - see Unsloth Pro for up to 30x faster training!
- If you trained a model with 🦥Unsloth, you can use this cool sticker!
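The LoRA / QLoRA finetuning mentioned above works by learning a small low-rank update on top of frozen base weights. A minimal NumPy sketch of the underlying math (illustrative only - not Unsloth's Triton kernels; all names and dimensions here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 8  # illustrative layer dimensions and LoRA rank
alpha = 16                     # LoRA scaling hyperparameter

W = rng.normal(size=(d_in, d_out))        # frozen base weight (never updated)
A = rng.normal(size=(d_in, rank)) * 0.01  # trainable low-rank factor
B = rng.normal(size=(rank, d_out)) * 0.01 # trainable low-rank factor (starts at zero in real LoRA)

def lora_forward(x):
    # Base output plus the scaled low-rank update: x @ W + (alpha/rank) * x @ A @ B
    return x @ W + (alpha / rank) * (x @ A @ B)

# After training, the adapter can be merged back into a single plain weight matrix,
# which is why LoRA-finetuned models export cleanly to formats like GGUF.
merged = W + (alpha / rank) * (A @ B)

x = rng.normal(size=(4, d_in))
assert np.allclose(lora_forward(x), x @ merged)
```

Because only `A` and `B` (2 * d * rank parameters per layer) are trained instead of the full `d_in * d_out` matrix, memory use drops sharply, and 4-bit QLoRA additionally keeps `W` quantized while the adapters stay in higher precision.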
💾 Installation Instructions
These are utilities for Unsloth, so install Unsloth as well! For the stable release of Unsloth Zoo, use pip install unsloth_zoo. For most installations, though, we recommend pip install "unsloth_zoo @ git+https://github.com/unslothai/unsloth-zoo.git".
pip install unsloth_zoo
License
Unsloth Zoo is licensed under the GNU Affero General Public License.