
| Feature | Description |
|---|---|
| 🗣️ Real-time conversations | Three speech engines (espeak, SpeechT5, OpenVoice); automatic language detection (OpenVoice); real-time voice chat with LLMs |
| 🤖 Customizable AI Agents | Custom agent names, moods, and personalities; Retrieval-Augmented Generation (RAG) |
| 📚 Enhanced Knowledge Retrieval | RAG for documents and websites; use local data to enrich chat |
| 🖼️ Image Generation & Manipulation | Text-to-image (Stable Diffusion 1.5, SDXL, Turbo); drawing tools and ControlNet; LoRA and embeddings; inpainting, outpainting, and filters |
| 🌍 Multi-lingual Capabilities | Partial multi-lingual TTS, STT, and interface; English and Japanese GUI |
| 🔒 Privacy and Security | Runs locally with no external API calls by default; customizable LLM guardrails and image safety checker; disables Hugging Face telemetry; restricts network access |
| ⚡ Performance & Utility | Fast generation (~2 s on an RTX 2080s); Docker-based setup and GPU acceleration; light/dark/system theming; NSFW toggles; extension API; Python library and API support |
🌐 Language Support

| Language | TTS | STT | LLM | GUI |
|---|---|---|---|---|
| English | ✅ | ✅ | ✅ | ✅ |
| Japanese | ✅ | ✅ | ✅ | ✅ |
| Spanish | ✅ | ✅ | ✅ | ❌ |
| French | ✅ | ✅ | ✅ | ❌ |
| Chinese | ✅ | ✅ | ✅ | ❌ |
| Korean | ✅ | ✅ | ✅ | ❌ |
💾 Installation Quick Start
⚙️ System Requirements
| Specification | Minimum | Recommended |
|---|---|---|
| OS | Ubuntu 22.04, Windows 10 | Ubuntu 22.04 (Wayland) |
| CPU | Ryzen 2700K or Intel Core i7-8700K | Ryzen 5800X or Intel Core i7-11700K |
| Memory | 16 GB RAM | 32 GB RAM |
| GPU | NVIDIA RTX 3060 or better | NVIDIA RTX 4090 or better |
| Network | Broadband (used to download models) | Broadband (used to download models) |
| Storage | 22 GB (with models), 6 GB (without models) | 100 GB or higher |
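As a quick preflight, the storage minimum from the table can be checked in Python. This is a sketch: the 22 GB figure is the install-with-models minimum above, and `has_enough_storage` is a hypothetical helper, not an AI Runner command.

```python
import shutil

def has_enough_storage(path: str = ".", required_gb: float = 22.0) -> bool:
    """Check that the filesystem holding `path` has at least `required_gb`
    gigabytes free (22 GB covers the install including models)."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3
```

Point `path` at the drive that will hold `~/.local/share/airunner`.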
🔧 Installation Steps
- Install system requirements:

  ```bash
  sudo apt update && sudo apt upgrade -y
  sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git nvidia-cuda-toolkit pipewire libportaudio2 libxcb-cursor0 gnupg gpg-agent pinentry-curses espeak xclip cmake qt6-qpa-plugins qt6-wayland qt6-gtk-platformtheme mecab libmecab-dev mecab-ipadic-utf8 libxslt-dev
  sudo apt install -y espeak espeak-ng-espeak
  ```
- Create the `airunner` data directory:

  ```bash
  sudo mkdir ~/.local/share/airunner
  sudo chown $USER:$USER ~/.local/share/airunner
  ```
- Install AI Runner (Python 3.13+ required; `pyenv` and `venv` are recommended, see the wiki for more info):

  ```bash
  pip install "typing-extensions==4.13.2"
  pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
  pip install "airunner[all_dev]"
  pip install -U timm
  ```
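After installing the PyTorch wheel, it is worth confirming that CUDA is actually visible before launching. A small diagnostic sketch (not part of AI Runner itself; availability depends on your driver matching the cu128 wheel):

```python
def cuda_summary() -> str:
    """Report whether PyTorch can see a CUDA device; degrades gracefully
    if torch is missing so the check never crashes."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    return "CUDA not available; AI Runner will fall back to CPU"

print(cuda_summary())
```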
- Run AI Runner:

  ```bash
  airunner
  ```
For more options, including Docker, see the Installation Wiki.
Basic Usage
- Run AI Runner:

  ```bash
  airunner
  ```

- Run the downloader:

  ```bash
  airunner-setup
  ```

- Build templates:

  ```bash
  airunner-build-ui
  ```
🤖 Models
These are the sizes of the optional models that power AI Runner.
| Category | Model | Size |
|---|---|---|
| Text-to-Speech | OpenVoice (Voice) | 4.0 GB |
| Text-to-Speech | Speech T5 (Voice) | 654.4 MB |
| Speech-to-Text | Whisper Tiny | 155.4 MB |
| Text Generation | Ministral 8b (default) | 4.0 GB |
| Text Generation | Ollama (various models) | 1.5 GB - 20 GB |
| Text Generation | OpenRouter (various models) | 1.5 GB - 20 GB |
| Text Generation | Huggingface (various models) | 1.5 GB - 20 GB |
| Text Generation | Ministral instruct 8b (4-bit) | 5.8 GB |
| Image Generation | Controlnet (SD 1.5) | 10.6 GB |
| Image Generation | Controlnet (SDXL) | 320.2 MB |
| Image Generation | Safety Checker + Feature Extractor | 3.2 GB |
| Image Generation | SD 1.5 | 1.6 GB |
| Image Generation | SDXL 1.0 | 6.45 GB |
Stack
AI Runner uses the following stack:
- SQLite: For local data storage
- Alembic: For database migrations
- SQLAlchemy: For ORM
- Pydantic: For data validation
- http.server: Basic local server for static files
- PySide6: For the GUI
- A variety of other libraries for TTS, STT, LLMs, and image generation
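As an illustration of how those pieces fit together, here is a minimal sketch pairing a Pydantic input model with a SQLAlchemy table backed by SQLite. The `Agent` model and its fields are hypothetical, not AI Runner's real schema.

```python
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Agent(Base):
    """Hypothetical ORM table for a chat agent (illustrative only)."""
    __tablename__ = "agents"
    id = Column(Integer, primary_key=True)
    name = Column(String(64), nullable=False)
    mood = Column(String(32), default="neutral")

class AgentIn(BaseModel):
    """Pydantic model that validates input before it reaches the ORM."""
    name: str
    mood: str = "neutral"

# In-memory SQLite stands in for the on-disk database.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

payload = AgentIn(name="Luna", mood="cheerful")
# model_dump() is Pydantic v2; fall back to dict() on v1.
data = payload.model_dump() if hasattr(payload, "model_dump") else payload.dict()

with Session(engine) as session:
    session.add(Agent(**data))
    session.commit()
    stored = session.query(Agent).filter_by(name="Luna").one()
```

In a real deployment, Alembic migrations would evolve the table definitions over time.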
✨ LLM Vendors
- Default local model: Ministral 8b instruct (4-bit)
- Ollama: a variety of local models to choose from (requires the Ollama CLI)
- OpenRouter: remote server-side LLMs (requires an API key)
- Huggingface: coming soon
🎨 Art Models
By default, AI Runner installs essential TTS/STT and minimal LLM components, but AI art models must be supplied by the user.
Organize them under your local AI Runner data directory:
```
~/.local/share/airunner
├── art
│   └── models
│       ├── SD 1.5
│       │   ├── controlnet
│       │   ├── embeddings
│       │   ├── inpaint
│       │   ├── lora
│       │   └── txt2img
│       ├── Flux (not supported yet)
│       ├── SDXL 1.0
│       │   ├── controlnet
│       │   ├── embeddings
│       │   ├── inpaint
│       │   ├── lora
│       │   └── txt2img
│       └── SDXL Turbo
│           ├── controlnet
│           ├── embeddings
│           ├── inpaint
│           ├── lora
│           └── txt2img
```
Optional third-party services
- OpenStreetMap: Map API
- OpenMeteo: Weather API
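Both are plain HTTP APIs. As an offline-safe sketch, here is how a current-weather request URL for Open-Meteo's documented `/v1/forecast` endpoint could be built (the coordinates are arbitrary; no network call is made):

```python
from urllib.parse import urlencode

OPEN_METEO = "https://api.open-meteo.com/v1/forecast"

def open_meteo_url(lat: float, lon: float) -> str:
    """Build an Open-Meteo current-weather request URL (no network I/O)."""
    query = urlencode({
        "latitude": lat,
        "longitude": lon,
        "current_weather": "true",
    })
    return f"{OPEN_METEO}?{query}"
```

Fetching the resulting URL returns a JSON body with a `current_weather` object.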
Contributing
We welcome pull requests for new features, bug fixes, or documentation improvements. You can also build and share extensions to expand AI Runner's functionality. For details, see the Extensions Wiki.
Take a look at the Contributing document and the Development wiki page for detailed instructions.
🧪 Testing & Test Organization
AI Runner uses `pytest` for all automated testing. Test coverage is a priority, especially for utility modules.
Test Directory Structure
- Headless-safe tests:
- Display-required (Qt/Xvfb) tests:
CI/CD
- By default, only headless-safe tests are run in CI.
- Display-required tests are intended for manual or special-case runs (e.g., when working on Qt threading or background worker code).
- (Optional) You may automate this split in CI by adding a separate job/step for xvfb tests.
General Testing Guidelines
- All new utility code must be accompanied by tests.
- Use `pytest`, `pytest-qt` (for GUI), and `unittest.mock` for mocking dependencies.
- For more details on writing and organizing tests, see the project coding guidelines and the `src/airunner/utils/tests/` folder.