Ollama LLM Bridge

Universal Ollama LLM Bridge supporting multiple models (Llama, Gemma, etc.) with a unified interface.

🚀 Features

  • Universal Ollama Support: Single package supporting all Ollama models
  • Model Auto-Detection: Automatically resolves appropriate model implementation
  • Type Safety: Full TypeScript support with comprehensive type definitions
  • Streaming Support: Native streaming API support
  • Multi-Modal: Image support for compatible models (Llama 3.2+)
  • Error Handling: Robust error handling with standardized error types
  • Extensible: Easy to add new model support

📦 Installation

# pnpm (recommended)
pnpm add ollama-llm-bridge llm-bridge-spec ollama zod

# npm
npm install ollama-llm-bridge llm-bridge-spec ollama zod

# yarn
yarn add ollama-llm-bridge llm-bridge-spec ollama zod

๐Ÿ—๏ธ Architecture

This package follows the Abstract Model Pattern, inspired by bedrock-llm-bridge:

ollama-llm-bridge/
├── models/
│   ├── base/AbstractOllamaModel    # Abstract base class
│   ├── llama/LlamaModel            # Llama implementation
│   ├── gemma/GemmaModel            # Gemma implementation
│   └── gpt-oss/GptOssModel         # GPT-OSS implementation
├── bridge/OllamaBridge             # Main bridge class
├── factory/                        # Factory functions
└── utils/error-handler             # Error handling
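
Adding a new model means subclassing AbstractOllamaModel under models/. The exact abstract contract is not documented in this README, so the sketch below is hypothetical: the static supportedModels field is an assumption, while getCapabilities() mirrors the call shown under Model Capabilities.

import { AbstractOllamaModel } from 'ollama-llm-bridge';

// Hypothetical Mistral support; hook names may differ in the real base class.
class MistralModel extends AbstractOllamaModel {
  // Assumed hook: Ollama model IDs this implementation should handle.
  static supportedModels = ['mistral', 'mistral:7b'];

  getCapabilities() {
    return { multiModal: false, streaming: true, functionCalling: false };
  }
}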

🎯 Quick Start

Basic Usage

import { createOllamaBridge } from 'ollama-llm-bridge';

// Create bridge with auto-detected model
const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2', // or 'gemma3n:latest' or 'gpt-oss-20:b'
  temperature: 0.7,
});

// Simple chat
const response = await bridge.invoke({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
});

console.log(response.choices[0].message.content[0].text);

Streaming

// Streaming chat
const stream = bridge.invokeStream({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Tell me a story' }] }],
});

for await (const chunk of stream) {
  const text = chunk.choices[0]?.message?.content?.[0]?.text;
  if (text) {
    process.stdout.write(text);
  }
}

Multi-Modal (Llama 3.2+)

const response = await bridge.invoke({
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        { type: 'image', data: 'base64_encoded_image_data' },
      ],
    },
  ],
});
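
The data field takes a base64 string. A minimal way to produce one from a local file in Node (the file path here is illustrative):

import { readFileSync } from 'node:fs';

// Encode a local image for the `data` field shown above.
const imageBase64 = readFileSync('./photo.jpg').toString('base64');

const response = await bridge.invoke({
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this photo.' },
        { type: 'image', data: imageBase64 },
      ],
    },
  ],
});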

🔧 Factory Functions

Main Factory

import { createOllamaBridge } from 'ollama-llm-bridge';

const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2', // Required
  temperature: 0.7,
  num_predict: 4096,
});

Convenience Factories

import {
  createLlamaBridge,
  createGemmaBridge,
  createGptOssBridge,
  createDefaultOllamaBridge,
} from 'ollama-llm-bridge';

// Llama with defaults
const llamaBridge = createLlamaBridge({
  model: 'llama3.2', // Optional, defaults to 'llama3.2'
  temperature: 0.8,
});

// Gemma with defaults
const gemmaBridge = createGemmaBridge({
  model: 'gemma3n:7b', // Optional, defaults to 'gemma3n:latest'
  num_predict: 1024,
});

// GPT-OSS with defaults
const gptOssBridge = createGptOssBridge({
  model: 'gpt-oss-20:b', // Optional, defaults to 'gpt-oss-20:b'
});

// Default configuration (Llama 3.2)
const defaultBridge = createDefaultOllamaBridge({
  temperature: 0.5, // Override defaults
});

📋 Supported Models

Llama Models

  • llama3.2 (with multi-modal support)
  • llama3.1
  • llama3
  • llama2
  • llama

Gemma Models

  • gemma3n:latest
  • gemma3n:7b
  • gemma3n:2b
  • gemma2:latest
  • gemma2:7b
  • gemma2:2b
  • gemma:latest
  • gemma:7b
  • gemma:2b

GPT-OSS Models

  • gpt-oss-20:b
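
Any of the IDs above can be passed straight to createOllamaBridge; per the Model Auto-Detection feature, the bridge resolves the matching implementation from the models/ tree:

import { createOllamaBridge } from 'ollama-llm-bridge';

// Each model ID resolves to its implementation automatically.
const llama = createOllamaBridge({ model: 'llama3.1' });      // LlamaModel
const gemma = createOllamaBridge({ model: 'gemma2:7b' });     // GemmaModel
const gptOss = createOllamaBridge({ model: 'gpt-oss-20:b' }); // GptOssModel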

โš™๏ธ Configuration

interface OllamaBaseConfig {
  host?: string; // Default: 'http://localhost:11434'
  model: string; // Required: Model ID
  temperature?: number; // 0.0 - 1.0
  top_p?: number; // 0.0 - 1.0
  top_k?: number; // Integer >= 1
  num_predict?: number; // Max tokens to generate
  stop?: string[]; // Stop sequences
  seed?: number; // Seed for reproducibility
  stream?: boolean; // Default: false
}
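
For reference, a bridge configured with every option (the values are illustrative):

const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2',
  temperature: 0.7,
  top_p: 0.9,
  top_k: 40,
  num_predict: 2048,
  stop: ['</answer>'],
  seed: 42, // fixed seed for reproducible output
});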

🎭 Model Capabilities

// Get model metadata
const metadata = bridge.getMetadata();

console.log(metadata);
// {
//   name: 'Llama',
//   version: '3.2',
//   description: 'Ollama Llama Bridge',
//   model: 'llama3.2',
//   contextWindow: 8192,
//   maxTokens: 4096
// }

// Check model features
const features = bridge.model.getCapabilities();
console.log(features.multiModal); // true for Llama 3.2+
console.log(features.streaming); // true for all models
console.log(features.functionCalling); // false (coming soon)
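
Since multi-modal support varies by model, code can branch on these flags before attaching an image (imageBase64 as prepared in the Multi-Modal section above):

const features = bridge.model.getCapabilities();

// Fall back to text-only input on models without image support.
const content = features.multiModal
  ? [
      { type: 'text', text: 'Describe the attachment.' },
      { type: 'image', data: imageBase64 },
    ]
  : [{ type: 'text', text: 'Summarize our conversation so far.' }];

const response = await bridge.invoke({ messages: [{ role: 'user', content }] });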

🚦 Error Handling

The bridge provides comprehensive error handling with standardized error types:

import { NetworkError, ModelNotSupportedError, ServiceUnavailableError } from 'llm-bridge-spec';

const prompt = {
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
};

try {
  const response = await bridge.invoke(prompt);
} catch (error) {
  if (error instanceof NetworkError) {
    console.error('Network issue:', error.message);
  } else if (error instanceof ModelNotSupportedError) {
    console.error('Unsupported model:', error.requestedModel);
    console.log('Supported models:', error.supportedModels);
  } else if (error instanceof ServiceUnavailableError) {
    console.error('Ollama server unavailable. Retry after:', error.retryAfter);
  }
}
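
A retry wrapper is a natural companion to these error types. This sketch assumes retryAfter is a delay in seconds, which the README does not specify:

async function invokeWithRetry(
  prompt: Parameters<typeof bridge.invoke>[0],
  retries = 3,
) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await bridge.invoke(prompt);
    } catch (error) {
      // Rethrow anything that is not a transient availability problem.
      if (!(error instanceof ServiceUnavailableError) || attempt >= retries) {
        throw error;
      }
      // Assumed unit: seconds. Fall back to 1s when retryAfter is absent.
      await new Promise((resolve) =>
        setTimeout(resolve, (error.retryAfter ?? 1) * 1000),
      );
    }
  }
}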

🔄 Model Switching

// Create bridge with initial model
const bridge = createOllamaBridge({ model: 'llama3.2' });

// Switch to different model at runtime
bridge.setModel('gemma3n:latest');

// Get current model
console.log(bridge.getCurrentModel()); // 'gemma3n:latest'

// Get supported models
console.log(bridge.getSupportedModels());

🧪 Testing

# Run unit tests
pnpm test

# Run tests with coverage
pnpm test:coverage

# Run e2e tests (requires running Ollama server)
pnpm test:e2e
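
The e2e suite needs a live Ollama server on localhost. The package does not document its test runner, so this smoke test is a hypothetical vitest-style sketch:

import { describe, expect, it } from 'vitest';
import { createOllamaBridge } from 'ollama-llm-bridge';

describe('ollama-llm-bridge e2e', () => {
  it('answers a trivial prompt against a local server', async () => {
    const bridge = createOllamaBridge({ model: 'llama3.2' });
    const response = await bridge.invoke({
      messages: [{ role: 'user', content: [{ type: 'text', text: 'ping' }] }],
    });
    expect(response.choices[0].message.content[0].text).toBeTruthy();
  });
});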

📊 Comparison with Previous Packages

| Feature          | llama3-llm-bridge    | gemma3n-llm-bridge   | ollama-llm-bridge   |
| ---------------- | -------------------- | -------------------- | ------------------- |
| Code Duplication | ❌ High              | ❌ High              | ✅ Eliminated       |
| Model Support    | 🔶 Llama only        | 🔶 Gemma only        | ✅ Universal        |
| Architecture     | 🔶 Basic             | 🔶 Basic             | ✅ Abstract Pattern |
| Extensibility    | ❌ Limited           | ❌ Limited           | ✅ Easy to extend   |
| Maintenance      | ❌ Multiple packages | ❌ Multiple packages | ✅ Single package   |

🔮 Roadmap

  • Function Calling Support
  • Batch Processing
  • More Ollama Models (CodeLlama, Mistral, etc.)
  • Custom Model Plugins
  • Performance Optimizations

🤝 Contributing

This project follows the Git Workflow Guide.

  • Issues: file new features or bug reports in GitHub Issues
  • Create a branch: git checkout -b feature/core-new-feature
  • TODO-based development: commit each task as a TODO unit
    git commit -m "✅ [TODO 1/3] Add new model support"
    
  • Quality checks: always verify before committing
    pnpm lint && pnpm test:ci && pnpm build
    
  • Open a PR: create a Pull Request on GitHub
  • Code review: Squash Merge after approval

📄 License

MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

Made with ❤️ by the LLM Bridge Team
