# Ollama LLM Bridge
A universal Ollama LLM bridge that supports multiple model families (Llama, Gemma, GPT-OSS, etc.) through a single, unified interface.
## 🚀 Features
- **Universal Ollama Support**: a single package that works with all supported Ollama models
- **Model Auto-Detection**: automatically resolves the appropriate model implementation from the model name
- **Type Safety**: full TypeScript support with comprehensive type definitions
- **Streaming Support**: native streaming API support
- **Multi-Modal**: image input for compatible models (Llama 3.2+)
- **Error Handling**: robust error handling with standardized error types
- **Extensible**: new models are easy to add (see the sketch under Architecture)
## 📦 Installation
```bash
# with pnpm
pnpm add ollama-llm-bridge llm-bridge-spec ollama zod

# with npm
npm install ollama-llm-bridge llm-bridge-spec ollama zod

# with yarn
yarn add ollama-llm-bridge llm-bridge-spec ollama zod
```
## 🏗️ Architecture
This package follows the Abstract Model Pattern inspired by the bedrock-llm-bridge:
```
ollama-llm-bridge/
├── models/
│   ├── base/AbstractOllamaModel   # Abstract base class
│   ├── llama/LlamaModel           # Llama implementation
│   ├── gemma/GemmaModel           # Gemma implementation
│   └── gpt-oss/GptOssModel        # GPT-OSS implementation
├── bridge/OllamaBridge            # Main bridge class
├── factory/                       # Factory functions
└── utils/error-handler            # Error handling
```
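Because every model implementation extends the same abstract base, supporting a new family is a matter of subclassing it. The sketch below is illustrative only: `AbstractOllamaModel`'s actual abstract method set is not documented here, and only `getCapabilities` appears elsewhere in this README (see Model Capabilities below).

```typescript
// Illustrative sketch only: the real AbstractOllamaModel may require
// additional abstract methods beyond what is shown here.
import { AbstractOllamaModel } from 'ollama-llm-bridge';

class MistralModel extends AbstractOllamaModel {
  // getCapabilities is referenced in the Model Capabilities section below.
  getCapabilities() {
    return { multiModal: false, streaming: true, functionCalling: false };
  }
}
```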
## 🎯 Quick Start
### Basic Usage
```typescript
import { createOllamaBridge } from 'ollama-llm-bridge';

const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2',
  temperature: 0.7,
});

const response = await bridge.invoke({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
});

console.log(response.choices[0].message.content[0].text);
```
### Streaming
```typescript
const stream = bridge.invokeStream({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Tell me a story' }] }],
});

for await (const chunk of stream) {
  const text = chunk.choices[0]?.message?.content[0]?.text;
  if (text) {
    process.stdout.write(text);
  }
}
```
### Multi-Modal (Llama 3.2+)
```typescript
const response = await bridge.invoke({
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        { type: 'image', data: 'base64_encoded_image_data' },
      ],
    },
  ],
});
```
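The `data` field expects a base64-encoded string. In Node.js, a local image can be encoded like this (the file path is illustrative):

```typescript
import { readFileSync } from 'node:fs';

// Read a local image and encode it as base64 for the `data` field above.
const imageData = readFileSync('./photo.jpg').toString('base64');
```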
## 🔧 Factory Functions
### Main Factory
```typescript
import { createOllamaBridge } from 'ollama-llm-bridge';

const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2',
  temperature: 0.7,
  num_predict: 4096,
});
```
### Convenience Factories
```typescript
import {
  createLlamaBridge,
  createGemmaBridge,
  createGptOssBridge,
  createDefaultOllamaBridge,
} from 'ollama-llm-bridge';

const llamaBridge = createLlamaBridge({
  model: 'llama3.2',
  temperature: 0.8,
});

const gemmaBridge = createGemmaBridge({
  model: 'gemma3n:7b',
  num_predict: 1024,
});

const gptOssBridge = createGptOssBridge({
  model: 'gpt-oss:20b',
});

const defaultBridge = createDefaultOllamaBridge({
  temperature: 0.5,
});
```
## 📋 Supported Models
### Llama Models

- `llama3.2` (with multi-modal support)
- `llama3.1`
- `llama3`
- `llama2`
- `llama`
### Gemma Models

- `gemma3n:latest`
- `gemma3n:7b`
- `gemma3n:2b`
- `gemma2:latest`
- `gemma2:7b`
- `gemma2:2b`
- `gemma:latest`
- `gemma:7b`
- `gemma:2b`
### GPT-OSS Models

- `gpt-oss:20b`
## ⚙️ Configuration
```typescript
interface OllamaBaseConfig {
  host?: string;         // Ollama server URL, e.g. 'http://localhost:11434'
  model: string;         // model name, e.g. 'llama3.2'
  temperature?: number;  // sampling temperature
  top_p?: number;        // nucleus sampling threshold
  top_k?: number;        // top-k sampling cutoff
  num_predict?: number;  // maximum number of tokens to generate
  stop?: string[];       // stop sequences
  seed?: number;         // seed for reproducible outputs
  stream?: boolean;      // enable streaming responses
}
```
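For example, a reproducible, bounded configuration (all values are illustrative):

```typescript
import { createOllamaBridge } from 'ollama-llm-bridge';

const bridge = createOllamaBridge({
  host: 'http://localhost:11434',
  model: 'llama3.2',
  temperature: 0,    // minimize sampling randomness
  seed: 42,          // fix the seed for repeatable outputs
  num_predict: 256,  // cap the response length
  stop: ['\n\n'],    // stop generating at the first blank line
});
```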
## 🎭 Model Capabilities
```typescript
// Bridge-level metadata
const capabilities = bridge.getMetadata();
console.log(capabilities);

// Model-level capability flags
const features = bridge.model.getCapabilities();
console.log(features.multiModal);       // image input support
console.log(features.streaming);        // streaming support
console.log(features.functionCalling);  // function/tool calling support
```
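These flags make it possible to guard features that only some models provide; for example, a sketch reusing the request shape from Quick Start:

```typescript
// Only attach image content when the active model supports multi-modal input.
const content: Array<{ type: string; text?: string; data?: string }> = [
  { type: 'text', text: 'Describe this image.' },
];

if (bridge.model.getCapabilities().multiModal) {
  content.push({ type: 'image', data: 'base64_encoded_image_data' });
}

const response = await bridge.invoke({ messages: [{ role: 'user', content }] });
```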
## 🚦 Error Handling
The bridge provides comprehensive error handling with standardized error types:
```typescript
import { NetworkError, ModelNotSupportedError, ServiceUnavailableError } from 'llm-bridge-spec';

try {
  const response = await bridge.invoke(prompt);
} catch (error) {
  if (error instanceof NetworkError) {
    console.error('Network issue:', error.message);
  } else if (error instanceof ModelNotSupportedError) {
    console.error('Unsupported model:', error.requestedModel);
    console.log('Supported models:', error.supportedModels);
  } else if (error instanceof ServiceUnavailableError) {
    console.error('Ollama server unavailable. Retry after:', error.retryAfter);
  }
}
```
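A simple retry wrapper can be built on `ServiceUnavailableError.retryAfter`. This is a sketch, and it assumes `retryAfter` is a delay in seconds; adjust if the spec defines different units.

```typescript
import { ServiceUnavailableError } from 'llm-bridge-spec';

// Sketch: retry while the Ollama server reports itself unavailable.
async function invokeWithRetry(prompt: Parameters<typeof bridge.invoke>[0], maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await bridge.invoke(prompt);
    } catch (error) {
      if (!(error instanceof ServiceUnavailableError) || attempt >= maxRetries) {
        throw error;
      }
      // Assumption: retryAfter is in seconds; fall back to 1s when absent.
      await new Promise((resolve) => setTimeout(resolve, (error.retryAfter ?? 1) * 1000));
    }
  }
}
```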
## 🔄 Model Switching
```typescript
const bridge = createOllamaBridge({ model: 'llama3.2' });

// Switch the active model at runtime
bridge.setModel('gemma3n:latest');

console.log(bridge.getCurrentModel());    // current model name
console.log(bridge.getSupportedModels()); // all supported model names
```
## 🧪 Testing
```bash
# unit tests
pnpm test

# coverage report
pnpm test:coverage

# end-to-end tests
pnpm test:e2e
```
## 📊 Comparison with Previous Packages
| Aspect | Previous Llama Package | Previous Gemma Package | ollama-llm-bridge |
| --- | --- | --- | --- |
| Code Duplication | ❌ High | ❌ High | ✅ Eliminated |
| Model Support | 🔶 Llama only | 🔶 Gemma only | ✅ Universal |
| Architecture | 🔶 Basic | 🔶 Basic | ✅ Abstract Pattern |
| Extensibility | ❌ Limited | ❌ Limited | ✅ Easy to extend |
| Maintenance | ❌ Multiple packages | ❌ Multiple packages | ✅ Single package |
## 🔮 Roadmap
## 🤝 Contributing

This project follows the Git Workflow Guide.
## 📄 License
MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments
Made with ❤️ by the LLM Bridge Team