Ollama Client

A JavaScript/TypeScript client for the Ollama LLM server, supporting model listing, text generation, streaming responses, embeddings, and model management.

Installation

npm install @agentic-kit/ollama
# or
yarn add @agentic-kit/ollama

Usage

import OllamaClient, { GenerateInput } from '@agentic-kit/ollama';

// Create a client (base URL defaults to http://localhost:11434 if omitted)
const client = new OllamaClient('http://localhost:11434');

// List available models
const models = await client.listModels();
console.log('Available models:', models);

// Non-streaming text generation, with the input typed as GenerateInput
const input: GenerateInput = { model: 'mistral', prompt: 'Hello, Ollama!' };
const output = await client.generate(input);
console.log(output);

// Streaming generation
await client.generate(
  { model: 'mistral', prompt: 'Hello, streaming!', stream: true },
  (chunk) => {
    console.log('Received chunk:', chunk);
  }
);

// Pull a model to local cache
await client.pullModel('mistral');

// Generate embeddings
const embedding = await client.generateEmbedding('Compute embeddings');
console.log('Embedding vector length:', embedding.length);

// Generate a conversational response with context
const response = await client.generateResponse(
  'What is the capital of France?',
  'Geography trivia'
);
console.log(response);

// Delete a pulled model when done
await client.deleteModel('mistral');

API Reference

  • new OllamaClient(baseUrl?: string) – defaults to http://localhost:11434
  • .listModels(): Promise<string[]>
  • .generate(input: GenerateInput, onChunk?: (chunk: string) => void): Promise<string | void>
  • .generateStreamingResponse(prompt: string, onChunk: (chunk: string) => void, context?: string): Promise<void> (see the sketch after this list)
  • .generateEmbedding(text: string): Promise<number[]>
  • .generateResponse(prompt: string, context?: string): Promise<string>
  • .pullModel(model: string): Promise<void>
  • .deleteModel(model: string): Promise<void>
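
generateStreamingResponse is the only method above not covered in the Usage section. A minimal sketch based solely on its documented signature (the optional context string's exact effect on the prompt is not specified by this README):

await client.generateStreamingResponse(
  'What is the capital of France?',
  (chunk) => process.stdout.write(chunk), // invoked once per streamed chunk
  'Geography trivia'                      // optional context
);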

GenerateInput type

interface GenerateInput {
  model: string;
  prompt: string;
  stream?: boolean;
}
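
These fields mirror the request body of Ollama's POST /api/generate endpoint. As a rough sketch of what the client likely sends under the hood (the README does not document the internals, so the direct fetch call below is an assumption, not the package's actual code):

// Hypothetical: how a GenerateInput maps onto the Ollama HTTP API
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'mistral', prompt: 'Hello, Ollama!', stream: false }),
});
const { response } = await res.json(); // non-streaming replies return the text in `response`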

Contributing

Please open issues or pull requests on GitHub.

API Response Format

The Ollama /api/tags endpoint returns the following JSON structure:

{
  "models": [
    {
      "name": "mistral:latest",
      "model": "mistral:latest",
      "modified_at": "2025-06-09T04:48:21.588888008Z",
      "size": 4113301824,
      "digest": "...",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": ["llama"],
        "parameter_size": "7.2B",
        "quantization_level": "Q4_0"
      }
    }
  ]
}

The listModels() method extracts and returns just the model names:

const client = new OllamaClient('http://localhost:11434');
const models = await client.listModels();
console.log(models); // ["mistral:latest", "llama2:latest", ...]
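
Internally this amounts to mapping the models array down to its name fields. A sketch, assuming the client queries /api/tags directly (the package's actual implementation is not shown here):

// Hypothetical reimplementation of listModels()
async function listModelNames(baseUrl = 'http://localhost:11434'): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}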

© Hyperweb (formerly Cosmology). See LICENSE for full licensing and disclaimer.
