Q-Ollama

A Node.js package for interacting with Ollama and Baichuan AI models, with a flexible API and CLI support.

Features

  • 🤖 Support for Ollama and Baichuan AI models
  • 🔄 Dynamic provider switching
  • 💬 Interactive chat and single message support
  • 🔧 Full TypeScript support
  • 🚀 Command-line tool support
  • 🐛 Debug mode with detailed logging
  • 📚 Comprehensive test cases

Installation

npm install q-ollama

Quick Start

1. Using Ollama

Make sure the Ollama service is running (default: http://localhost:11434):

const { QOllama, ProviderType } = require('q-ollama');

// Create instance
const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  ollamaBaseUrl: 'http://localhost:11434',
  defaultModel: 'qwen3:8b',
  debug: true
});

// Quick chat
async function chat() {
  const response = await qollama.quickChat('Hello, please introduce yourself');
  console.log('AI Response:', response.content);
}

chat();

2. Using Baichuan Model

Set the BAICHUAN_API_KEY environment variable or provide the API key directly:

const { QOllama, ProviderType } = require('q-ollama');

const qollama = new QOllama({
  provider: ProviderType.BAICHUAN,
  baichuanApiKey: 'your-api-key-here', // or use environment variable
  defaultModel: 'Baichuan2-Turbo'
});

async function chat() {
  const response = await qollama.quickChat('Hello');
  console.log('Baichuan Response:', response.content);
}

chat();

3. Dynamic Provider Switching

// Start with Ollama
const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  defaultModel: 'qwen3:8b'
});

console.log('Current provider:', qollama.getCurrentProvider()); // ollama

// Switch to Baichuan
qollama.switchProvider({
  provider: ProviderType.BAICHUAN,
  baichuanApiKey: process.env.BAICHUAN_API_KEY
});

console.log('Switched provider:', qollama.getCurrentProvider()); // baichuan

API Reference

QOllama Class

Constructor

new QOllama(config: QOllamaConfig)

Configuration options:

interface QOllamaConfig {
  provider: ProviderType;          // Model provider
  ollamaBaseUrl?: string;          // Ollama service URL
  baichuanApiKey?: string;         // Baichuan API key
  defaultModel?: string;           // Default model
  debug?: boolean;                 // Debug mode
}

Methods

  • chat(messages: ChatMessage[], options?: ChatOptions): Promise<ChatResponse> - Send chat messages
  • quickChat(prompt: string, options?: ChatOptions): Promise<ChatResponse> - Quick single message
  • switchProvider(newConfig: QOllamaConfig): void - Switch model provider
  • getCurrentProvider(): string - Get current provider
  • supportsStreaming(): boolean - Check if streaming is supported
  • listModels(): Promise<string[]> - List available models
  • setDebug(debug: boolean): void - Set debug mode
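
The Quick Start sections demonstrate quickChat() but not the multi-message chat() call, supportsStreaming(), or listModels(). The sketch below combines them; the { role, content } message shape is an assumption, since this README never defines ChatMessage.

const { QOllama, ProviderType } = require('q-ollama');

const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  defaultModel: 'qwen3:8b'
});

async function main() {
  // List the models available from the current provider
  const models = await qollama.listModels();
  console.log('Available models:', models);

  // Check streaming support before relying on it
  console.log('Streaming supported:', qollama.supportsStreaming());

  // Multi-turn conversation via chat(); the { role, content } shape is
  // assumed here -- ChatMessage is not defined in this README
  const response = await qollama.chat([
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize what Ollama is in one sentence.' }
  ]);
  console.log('AI Response:', response.content);
}

main();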

Helper Functions

const { createQOllama, createOllamaProvider, createBaichuanProvider, ProviderType } = require('q-ollama');

// Quick instance creation from a full QOllamaConfig object
const config = { provider: ProviderType.OLLAMA, defaultModel: 'qwen3:8b' };
const qollama1 = createQOllama(config);

// Provider-specific shortcuts; the trailing boolean presumably enables
// debug mode (the flag is not documented in this README)
const qollama2 = createOllamaProvider('http://localhost:11434', true);
const qollama3 = createBaichuanProvider('your-api-key', true);

Command Line Tool

After installation, use the q-ollama command:

Interactive Chat

# Using Ollama
q-ollama chat --provider ollama --model qwen3:8b

# Using Baichuan
q-ollama chat --provider baichuan --model Baichuan2-Turbo --key YOUR_API_KEY

Single Message

q-ollama message "Hello world" --provider ollama --model qwen3:8b

List Available Models

q-ollama list-models --provider ollama

Full Command Help

q-ollama --help

Debug Mode

Enable debug mode to see detailed request and response information:

const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  debug: true  // Enable debug
});

// Or enable at runtime
qollama.setDebug(true);

Debug output includes:

  • Method call parameters
  • API request details
  • Response data
  • Error information

Environment Variables

  • BAICHUAN_API_KEY - Baichuan model API key
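
A minimal sketch of wiring the environment variable into the library by hand (the guard and error message are illustrative, not part of the package; as noted above, the package can also pick the key up from the environment itself):

const { QOllama, ProviderType } = require('q-ollama');

// Fail fast if the key is missing rather than at the first API call
const apiKey = process.env.BAICHUAN_API_KEY;
if (!apiKey) {
  throw new Error('BAICHUAN_API_KEY is not set');
}

const qollama = new QOllama({
  provider: ProviderType.BAICHUAN,
  baichuanApiKey: apiKey,
  defaultModel: 'Baichuan2-Turbo'
});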

Development

Build Project

npm run build

Run Tests

npm test

Development Mode

npm run dev

Examples

Check the examples/ directory for complete examples:

node examples/basic-usage.js

License

MIT

Contributing

Issues and Pull Requests are welcome!

Support

If you encounter issues:

  • Check the debug-mode output for request and response details
  • Ensure the backing services (e.g. the Ollama server) are running
  • Review the test cases for examples of correct usage
