Ollama Python Library

The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with Ollama.

Prerequisites

You need a local Ollama server running before you can use this library. To set one up:

  • Download Ollama: https://ollama.com/
  • Run an LLM from the model library: https://ollama.com/library
    • Example: ollama run llama2
    • Example: ollama run llama2:70b

On Linux, you can instead install Ollama and start the server from the shell:

curl https://ollama.ai/install.sh | sh
ollama serve

Then you can continue with ollama-python.

Install

pip install ollama

Usage

import ollama
response = ollama.chat(model='llama3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])

Streaming responses

Response streaming can be enabled by setting stream=True. This modifies the function to return a Python generator that yields each part of the response as it arrives.

import ollama

stream = ollama.chat(
    model='llama3',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)

for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)
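The chunks above can also be collected back into a single string. The helper below is a minimal sketch, not part of the ollama API; it only assumes each chunk has the `chunk['message']['content']` shape shown above, and the stand-in data mimics that shape so the pattern runs without a server.

```python
def collect_stream(chunks):
    """Concatenate the content of every streamed chat chunk."""
    return ''.join(chunk['message']['content'] for chunk in chunks)

# Stand-in chunks with the same shape a real stream yields:
fake_stream = [
    {'message': {'role': 'assistant', 'content': 'The sky '}},
    {'message': {'role': 'assistant', 'content': 'is blue.'}},
]
print(collect_stream(fake_stream))  # The sky is blue.
```

In practice you would pass the generator returned by ollama.chat(..., stream=True) straight into collect_stream.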

API

The Ollama Python library's API is designed around the Ollama REST API.

Chat

ollama.chat(model='llama3', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])

Generate

ollama.generate(model='llama3', prompt='Why is the sky blue?')

List

ollama.list()

Show

ollama.show('llama3')

Create

modelfile='''
FROM llama3
SYSTEM You are mario from super mario bros.
'''

ollama.create(model='example', modelfile=modelfile)

Copy

ollama.copy('llama3', 'user/llama3')

Delete

ollama.delete('llama3')

Pull

ollama.pull('llama3')

Push

ollama.push('user/llama3')

Embeddings

ollama.embeddings(model='llama3', prompt='The sky is blue because of Rayleigh scattering')
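Embedding vectors are commonly compared with cosine similarity to rank semantic closeness. The function below is a minimal sketch, not part of the ollama API, and the toy vectors stand in for real embedding output:

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding outputs:
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```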

Ps

ollama.ps()

Custom client

A custom client can be created with the following fields:

  • host: The Ollama host to connect to
  • timeout: The timeout for requests

from ollama import Client
client = Client(host='http://localhost:11434')
response = client.chat(model='llama3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])

Async client

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama3', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())

Setting stream=True modifies functions to return a Python asynchronous generator:

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama3', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())

Errors

Errors are raised if requests return an error status or if an error is detected while streaming.

model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
