# ollama

Interface with an ollama instance over HTTP.
## Install

```shell
npm i ollama
```
## Usage

```javascript
import { Ollama } from "ollama";

const ollama = new Ollama();

for await (const token of ollama.generate("llama2", "What is a llama?")) {
  process.stdout.write(token);
}
```
## API

The API aims to mirror the HTTP API for Ollama.
### Ollama

```javascript
new Ollama(config);
```

- `config` `<Object>` The configuration object for Ollama.
  - `address` `<string>` The Ollama API address. Default: `"http://localhost:11434"`.

Create a new API handler for ollama.
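To reach a non-local instance, pass an `address` in the config, e.g. `new Ollama({ address: "http://gpu-box:11434" })`. The snippet below sketches how such a default is typically merged with user config; it is an illustration of the pattern, not the library's actual internals (the host name `gpu-box` is made up):

```javascript
// Illustrative only: how a default address can be merged with user config.
const DEFAULT_ADDRESS = "http://localhost:11434";

function resolveConfig(config = {}) {
  // Spread after the default so user-supplied fields win.
  return { address: DEFAULT_ADDRESS, ...config };
}

console.log(resolveConfig().address);                          // default address
console.log(resolveConfig({ address: "http://gpu-box:11434" }).address);
```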
### generate

```javascript
ollama.generate(model, prompt);
```

- `model` `<string>` The name of the model to use for the prompt.
- `prompt` `<string>` The prompt to give the model.
- Returns: `<AsyncGenerator<string, GenerateResult>>` A generator that yields the tokens as strings.

Generate a response for a given prompt with a provided model. The final response object will include statistics and additional data from the request.
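Because the final `GenerateResult` is the *return value* of the async generator, a plain `for await` loop discards it; iterating manually with `.next()` keeps it. The pattern is sketched below with a stand-in generator, since the real result's shape depends on the server response (the `totalTokens` field is made up for illustration):

```javascript
// Stand-in for ollama.generate(...): yields tokens, returns a stats object.
async function* fakeGenerate() {
  yield "A ";
  yield "llama ";
  yield "is a camelid.";
  return { totalTokens: 3 }; // stand-in for GenerateResult
}

// Consume a generator while also capturing its return value.
async function collect(gen) {
  let tokens = "";
  let next = await gen.next();
  while (!next.done) {
    tokens += next.value;
    next = await gen.next();
  }
  // When done is true, next.value holds the generator's return value.
  return { tokens, result: next.value };
}

collect(fakeGenerate()).then(({ tokens, result }) => {
  console.log(tokens);
  console.log(result);
});
```

The same `collect` pattern applies to the real generator returned by `ollama.generate`.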
### create

```javascript
ollama.create(name, path);
```

- `name` `<string>` The name of the model.
- `path` `<string>` The path to the Modelfile.
- Returns: `<AsyncGenerator<CreateStatus>>` A generator that outputs the status of creation.

Create a model from a Modelfile.
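A minimal Modelfile might look like the following (`FROM` and `SYSTEM` are standard Ollama Modelfile instructions; the values are illustrative):

```
FROM llama2
SYSTEM "You are a concise assistant that answers in one sentence."
```

With that saved as `./Modelfile`, the model could then be built by iterating `ollama.create("concise-llama", "./Modelfile")` (the model name here is made up).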
### tags

```javascript
ollama.tags();
```

- Returns: `<Promise<Tag[]>>` A list of tags.

List models that are available locally.
### copy

```javascript
ollama.copy(source, destination);
```

- `source` `<string>` The name of the model to copy.
- `destination` `<string>` The name of the copied model.
- Returns: `<Promise<void>>`

Copy a model. Creates a model with another name from an existing model.
### delete

```javascript
ollama.delete(model);
```

- `model` `<string>` The name of the model to delete.
- Returns: `<Promise<void>>`

Delete a model and its data.
### pull

```javascript
ollama.pull(name);
```

- `name` `<string>` The name of the model to download.
- Returns: `<AsyncGenerator<PullResult>>` A generator that outputs the status of the download.

Download a model from the model registry. Cancelled pulls are resumed from where they left off, and multiple calls will share the same download progress.
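The status stream lends itself to a progress display. The sketch below uses a stand-in generator, since the exact fields of `PullResult` depend on the registry response (the `status`/`completed`/`total` fields are assumptions for illustration):

```javascript
// Stand-in for ollama.pull(...): yields made-up status objects.
async function* fakePull() {
  yield { status: "downloading", completed: 512, total: 2048 };
  yield { status: "downloading", completed: 2048, total: 2048 };
  yield { status: "success" };
}

// Turn status objects into human-readable progress lines.
async function showProgress(gen) {
  const lines = [];
  for await (const status of gen) {
    if (status.total) {
      const pct = Math.round((100 * status.completed) / status.total);
      lines.push(`${status.status}: ${pct}%`);
    } else {
      lines.push(status.status);
    }
  }
  return lines;
}

showProgress(fakePull()).then((lines) => lines.forEach((l) => console.log(l)));
```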
### embeddings

```javascript
ollama.embeddings(model, prompt);
```

- `model` `<string>` The name of the model to generate embeddings with.
- `prompt` `<string>` The prompt to generate embeddings for.
- Returns: `<Promise<number[]>>` The embeddings.

Generate embeddings from a model.
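Embedding vectors are commonly compared with cosine similarity. A minimal sketch (the short vectors below are stand-ins for two `ollama.embeddings` results, which would be much longer):

```javascript
// Cosine similarity between two equal-length number arrays:
// dot(a, b) / (|a| * |b|). Close to 1 means similar direction.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// In practice: const a = await ollama.embeddings("llama2", "What is a llama?");
console.log(cosineSimilarity([1, 0, 1], [1, 0, 1])); // ≈ 1: same direction
console.log(cosineSimilarity([1, 0], [0, 1]));       // 0: orthogonal
```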
## Building

To build the project files, run:

```shell
npm run build
```

## Testing

To lint files:

```shell
npm run lint
```