# Ollama Model Generator
![NodeJS Version](https://img.shields.io/node/v/ollama-model-generator?logo=node.js&logoColor=white)
## Introduction
This Node.js CLI script simplifies adding GGUF models to Ollama by creating symlinks and downloading the necessary metadata from the Ollama Registry.
Benefits:
- Avoids model duplication within Ollama.
- Easy integration of GGUF models.
- No dependencies besides Node.js.
## Installation
Requires Node.js version 18.11.0 or higher. Install the package globally using npm:

```sh
npm install -g ollama-model-generator
```
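If you are unsure whether your environment meets the version requirement, you can check before installing:

```sh
# Should print v18.11.0 or newer
node --version
```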
## Usage

```sh
ollama-model-generator [options]
```
Options:

- `--model <path>`: Path to the GGUF model file, which will be symlinked into Ollama's blob storage.
  If the file does not exist, it is downloaded from the Ollama Registry based on `--from`.
  Optional; if not provided, the model is downloaded into the Ollama blob storage.
- `--from, -f <name>`: Model name in the Ollama Registry to download as a base.
  Default: the architecture of the GGUF model.
- `--name, -n <name>`: Name of the new model in Ollama.
  Default: `basename-size-finetune-version` of the GGUF model.
  If `--model` is not provided, it defaults to the name from `--from`.
- `--show, -s`: Prints the model metadata from the GGUF file header as JSON (requires `--model`).
- `--registry, -r <registry>`: The Ollama Registry URL.
  Default: `registry.ollama.ai`
- `--dir, -d <path>`: Directory for storing Ollama model data.
  Default: `$OLLAMA_MODELS` or `~/.ollama/models`

Additional files can be symlinked in the same way as `--model` (see the Ollama Model File documentation):
`--adapter`, `--embed`, `--license`, `--messages`, `--params`, `--projector`, `--prompt`, `--system`, `--template`
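The default `--name` described above could be assembled from GGUF metadata roughly as follows. The `general.*` key names here are assumptions based on the common GGUF naming convention, not necessarily what the tool reads:

```javascript
// Sketch of a basename-size-finetune-version default name, built from
// hypothetical GGUF "general.*" metadata fields; missing parts are skipped.
function defaultModelName(meta) {
  return [
    meta["general.basename"],
    meta["general.size_label"],
    meta["general.finetune"],
    meta["general.version"],
  ]
    .filter(Boolean)
    .join("-");
}
```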
## Examples
### Download a model from the Ollama Registry

```sh
ollama-model-generator --from gemma2
```

This downloads the Gemma 2 model from the Ollama Registry and configures it in Ollama (equivalent to `ollama pull gemma2`).
### Use a local GGUF model

```sh
ollama-model-generator --from llama3.1 --model my-model.gguf --name LLama3.1-MyModel
```

This uses the local `my-model.gguf` file and configures it in Ollama under the name `LLama3.1-MyModel`.
The Ollama metadata (template, params, etc.) is taken from the Llama 3.1 model.
### Use a custom template

```sh
ollama-model-generator --from gemma2 --template my-template.txt
```

This downloads the Gemma 2 model but uses the local `my-template.txt` file as the prompt template.
### Print GGUF metadata

```sh
ollama-model-generator --show --model my-model.gguf
```

Prints the GGUF metadata of the model file as JSON.