@smartledger/lumen-llm — published on npm, version 1.0.2
@lumenchat/llm

Universal LLM provider abstraction for LumenChat with structured JSON responses and cryptographic signatures.

Features

  • Provider Abstraction: Unified interface for OpenAI, Anthropic, Llama, etc.
  • Structured JSON: Schema-validated JSON responses
  • Cryptographic Signing: Sign responses with agent keys
  • Extensible: Easy to add new providers
  • Type Safety: Clear interfaces and error handling

Installation

npm install @lumenchat/llm @lumenchat/signatures

Usage

Basic Structured JSON Generation

import { generateStructuredJSON, OpenAIProvider } from '@lumenchat/llm';

const schema = {
  type: 'object',
  properties: {
    answer: { type: 'string' },
    confidence: { type: 'number' }
  },
  required: ['answer', 'confidence']
};

const messages = [
  { role: 'user', content: 'What is 2+2?' }
];

const provider = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY });
const response = await generateStructuredJSON(messages, schema, {}, provider);

// {
//   answer: 'The answer is 4',
//   confidence: 1.0,
//   _llm: {
//     provider: 'openai',
//     model: 'gpt-4o-mini',
//     elapsed: 1234,
//     temperature: 0.4
//   }
// }

Signed Structured Responses

import { signedStructuredResponse } from '@lumenchat/llm';
import { createAgentKeys } from '@lumenchat/signatures';

const agentKeys = createAgentKeys('MathAgent');

const response = await signedStructuredResponse(
  messages,
  schema,
  agentKeys,
  { temperature: 0.3 }
);

// {
//   answer: 'The answer is 4',
//   confidence: 1.0,
//   _llm: { ... },
//   _signature: {
//     signature: '3044022...',
//     publicKey: '02abc...',
//     address: '1XYZ...',
//     timestamp: '2025-11-20T...',
//     algorithm: 'BSV-ECDSA-DER',
//     responseHash: 'a1b2c3...',
//     agentIdentity: 'MathAgent'
//   }
// }

Custom Provider

import { BaseLLMProvider } from '@lumenchat/llm';

class CustomProvider extends BaseLLMProvider {
  async complete(messages, options = {}) {
    // Call your underlying API here; `yourAPI` is a placeholder
    const response = await yourAPI.generate(messages);
    return this.normalizeResponse(response);
  }

  normalizeResponse(rawResponse) {
    return {
      success: true,
      content: rawResponse.data,
      rawContent: JSON.stringify(rawResponse),
      provider: 'custom',
      model: 'your-model',
      elapsed: 0
    };
  }

  getName() {
    return 'custom';
  }

  async isAvailable() {
    return true;
  }

  getCapabilities() {
    return {
      structuredOutput: true,
      streaming: false,
      contextWindow: 8000,
      maxTokens: 2048
    };
  }
}

Provider Factory

import { createProvider } from '@lumenchat/llm';

const provider = createProvider('openai', {
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o-mini',
  temperature: 0.3
});

const response = await provider.complete(messages);

API Reference

generateStructuredJSON(messages, schema, options, provider)

Generate schema-validated JSON response.

Parameters:

  • messages (Array): Chat messages with role and content
  • schema (Object): JSON Schema for response validation
  • options (Object): Generation options (temperature, model, etc.)
  • provider (BaseLLMProvider): LLM provider instance (default: OpenAIProvider)

Returns: Promise<Object> - Response with _llm metadata
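To make "schema-validated" concrete, here is a minimal, self-contained sketch of the kind of check a JSON Schema implies for the `required` and `type` keywords. This is not the library's internal validator, just an illustration of why a response missing a required field would be rejected:

```javascript
// Illustrative only — NOT @lumenchat/llm's actual validator.
// Handles just the `type`, `properties`, and `required` keywords.
function checkAgainstSchema(value, schema) {
  if (schema.type === 'object') {
    if (typeof value !== 'object' || value === null) return false;
    for (const key of schema.required ?? []) {
      if (!(key in value)) return false;
    }
    for (const [key, propSchema] of Object.entries(schema.properties ?? {})) {
      if (key in value && !checkAgainstSchema(value[key], propSchema)) return false;
    }
    return true;
  }
  return typeof value === schema.type;
}

const schema = {
  type: 'object',
  properties: {
    answer: { type: 'string' },
    confidence: { type: 'number' }
  },
  required: ['answer', 'confidence']
};

console.log(checkAgainstSchema({ answer: '4', confidence: 1.0 }, schema)); // true
console.log(checkAgainstSchema({ answer: '4' }, schema)); // false: confidence missing
```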

signedStructuredResponse(messages, schema, agentKey, options, provider)

Generate signed structured JSON response.

Parameters:

  • messages (Array): Chat messages
  • schema (Object): JSON Schema
  • agentKey (Object): Agent keys from @lumenchat/signatures
  • options (Object): Generation options
  • provider (BaseLLMProvider): LLM provider instance

Returns: Promise<Object> - Response with _llm and _signature

createProvider(type, config)

Factory function to create provider instances.

Parameters:

  • type (string): Provider type ('openai', 'anthropic', 'llama')
  • config (Object): Provider configuration

Returns: BaseLLMProvider instance

BaseLLMProvider

Abstract base class for LLM providers.

Methods:

  • async complete(messages, options) - Generate completion
  • normalizeResponse(rawResponse) - Normalize to standard format
  • getName() - Get provider name
  • async isAvailable() - Check availability
  • getCapabilities() - Get provider capabilities

OpenAIProvider

OpenAI implementation of BaseLLMProvider.

Constructor Options:

  • apiKey (string): OpenAI API key (or use OPENAI_API_KEY env)
  • model (string): Model name (default: 'gpt-4o-mini')
  • temperature (number): Default temperature (default: 0.4)
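The options above imply a common resolution order: explicit config wins, then the `OPENAI_API_KEY` environment variable, then the documented defaults. A sketch of that pattern (not the package's actual source):

```javascript
// Sketch of constructor-option resolution for OpenAIProvider:
// explicit option > environment variable > documented default.
function resolveOpenAIOptions(options = {}) {
  return {
    apiKey: options.apiKey ?? process.env.OPENAI_API_KEY,
    model: options.model ?? 'gpt-4o-mini',
    temperature: options.temperature ?? 0.4
  };
}

const resolved = resolveOpenAIOptions({ temperature: 0.1 });
// resolved.model → 'gpt-4o-mini' (default), resolved.temperature → 0.1 (explicit)
```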

Response Format

All structured responses include:

{
  // Schema fields...
  field1: value1,
  field2: value2,
  
  // LLM metadata
  _llm: {
    provider: 'openai',
    model: 'gpt-4o-mini',
    elapsed: 1234,  // milliseconds
    temperature: 0.4
  },
  
  // Signature (if using signedStructuredResponse)
  _signature: {
    signature: '3044022...',
    publicKey: '02abc...',
    address: '1XYZ...',
    timestamp: '2025-11-20T...',
    algorithm: 'BSV-ECDSA-DER',
    responseHash: 'a1b2c3...',
    agentIdentity: 'AgentName'
  }
}
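Because the `_llm` and `_signature` blocks ride alongside your schema fields in the same object, downstream code often wants to split them apart before persisting or displaying the payload. A small helper for that (illustrative only, not part of the package's API):

```javascript
// Separate the schema fields from the _llm/_signature metadata blocks.
function splitResponse(response) {
  const { _llm, _signature, ...fields } = response;
  return { fields, llm: _llm, signature: _signature };
}

const { fields, llm } = splitResponse({
  answer: 'The answer is 4',
  confidence: 1.0,
  _llm: { provider: 'openai', model: 'gpt-4o-mini', elapsed: 1234, temperature: 0.4 }
});
// fields → { answer: 'The answer is 4', confidence: 1.0 }
// llm.provider → 'openai'
```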

Environment Variables

  • OPENAI_API_KEY: OpenAI API key
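For local development, the key can be exported in the shell before running your app (the key value below is a placeholder):

```shell
# Make the key available to the OpenAIProvider for this shell session
export OPENAI_API_KEY="sk-..."
```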

License

PROPRIETARY - Copyright © 2025 Gregory J. Ward and SmartLedger.Technology

Keywords

lumen

Package last updated on 21 Nov 2025