@lanonasis/ai-sdk

Drop-in AI SDK for browser and Node.js applications with persistent memory, chat completions, and vortexai-l0 orchestration.

Version

Current Release: v0.3.0

Features

  • 🔑 API Key Authentication - Secure lano_xxx... format keys
  • 💾 Persistent Memory - Built-in integration with @lanonasis/memory-sdk-standalone
  • 🌊 Streaming Support - Real-time response streaming
  • 🎭 Orchestration - Complex multi-agent workflows via vortexai-l0
  • ⚛️ React Hooks - First-class React integration
  • 📦 Tree-shakeable - Only import what you need
  • 🔒 Type-safe - Full TypeScript support

Installation

npm install @lanonasis/ai-sdk
# or
bun add @lanonasis/ai-sdk

Environment Setup

Add your API key to your environment:

# .env.local (Next.js) or .env
LANONASIS_API_KEY=lano_your_api_key_here

# For client-side usage in Next.js
NEXT_PUBLIC_LANONASIS_API_KEY=lano_your_api_key_here

Once configured, the SDK works seamlessly with no additional setup required.
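Since every client needs a valid key, it can help to fail fast when the variable is missing or malformed. A minimal sketch, assuming only the documented lano_ key prefix; the requireApiKey helper is our own illustration, not an SDK export:

```typescript
// Return the API key from an env-like object, or throw a descriptive error.
// The `lano_` prefix check mirrors the documented key format.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.LANONASIS_API_KEY;
  if (!key) {
    throw new Error('LANONASIS_API_KEY is not set; add it to .env or .env.local');
  }
  if (!key.startsWith('lano_')) {
    throw new Error('LANONASIS_API_KEY does not look like a lano_... key');
  }
  return key;
}

// Example with a well-formed placeholder key:
const apiKey = requireApiKey({ LANONASIS_API_KEY: 'lano_demo_key' });
```

In an app you would call requireApiKey(process.env) once at startup and pass the result to the constructor.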

Quick Start

import { LanonasisAI } from '@lanonasis/ai-sdk';

// Initialize with API key from environment
const ai = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
  // baseUrl defaults to https://api.lanonasis.com
});

// Simple message
const response = await ai.send('Hello, who are you?');
console.log(response);

// Chat with full control
const result = await ai.chat({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is 2+2?' },
  ],
  model: 'gpt-4o-mini',
  temperature: 0.7,
  maxTokens: 1000,
});

console.log(result.response);

Detailed Examples

Basic Chat Completion

import { LanonasisAI } from '@lanonasis/ai-sdk';

const ai = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
});

// Single turn conversation
const response = await ai.chat({
  messages: [
    { role: 'user', content: 'Explain quantum computing in simple terms' }
  ],
  model: 'gpt-4o-mini',
});

console.log(response.response);
// Output: "Quantum computing is like having a super-powered calculator..."

Multi-turn Conversation

const ai = new LanonasisAI({ apiKey: process.env.LANONASIS_API_KEY! });

// Build conversation history
const messages = [
  { role: 'system', content: 'You are a coding assistant specializing in TypeScript.' },
  { role: 'user', content: 'How do I create a generic function?' },
];

const response1 = await ai.chat({ messages });
console.log(response1.response);

// Continue the conversation
messages.push({ role: 'assistant', content: response1.response });
messages.push({ role: 'user', content: 'Can you show me an example with arrays?' });

const response2 = await ai.chat({ messages });
console.log(response2.response);

With Persistent Memory

The SDK automatically integrates with @lanonasis/memory-sdk-standalone:

const ai = new LanonasisAI({
  apiKey: process.env.LANONASIS_API_KEY!,
  memory: {
    enabled: true,
    autoSave: true,
    contextStrategy: 'relevance',
  },
});

// Store important information
await ai.memory.createMemory({
  title: 'User Preferences',
  content: 'User prefers dark mode and concise responses',
  status: 'active',
});

// Search memories
const memories = await ai.memory.searchMemories({
  query: 'user preferences',
  status: 'active',
  threshold: 0.7,
});

// Chat with memory context
const response = await ai.chat({
  messages: [{ role: 'user', content: 'Remember my preferences?' }],
  conversationId: 'user-123-session',
});

// Get context from memories
const context = await ai.memory.searchWithContext('previous conversations');

Streaming Responses

// Async iterator style
for await (const chunk of ai.chatStream({
  messages: [{ role: 'user', content: 'Tell me a story' }],
})) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

// Callback style
await ai.streamChat(
  { messages: [{ role: 'user', content: 'Count to 10' }] },
  (chunk) => console.log(chunk.choices[0]?.delta?.content)
);

Orchestration

Use vortexai-l0 for complex workflows:

// Remote API orchestration (default)
const result = await ai.orchestrate('Create a viral TikTok campaign');

// Local orchestration (no API call)
const localAi = new LanonasisAI({
  apiKey: 'lano_xxx',
  useLocalOrchestration: true,
});
const localResult = await localAi.orchestrate('analyze trending hashtags');

React Integration

import { useLanonasis, useChat } from '@lanonasis/ai-sdk/react';

function ChatComponent() {
  const { client, isReady } = useLanonasis({
    apiKey: process.env.NEXT_PUBLIC_LANONASIS_API_KEY!,
  });

  const { messages, send, isLoading, sendWithStream } = useChat({
    client,
    systemPrompt: 'You are a helpful assistant.',
  });

  return (
    <div>
      {messages.map((msg, i) => (
        <div key={i} className={msg.role}>{msg.content}</div>
      ))}
      {isLoading && <div>Thinking...</div>}
    </div>
  );
}

Configuration

new LanonasisAI({
  apiKey: string;              // Required: lano_xxx format
  baseUrl?: string;            // Default: https://api.lanonasis.com
  memoryUrl?: string;          // Default: same as baseUrl
  timeout?: number;            // Default: 30000 (30s)
  maxRetries?: number;         // Default: 3
  debug?: boolean;             // Default: false
  organizationId?: string;     // For multi-tenant setups
  useLocalOrchestration?: boolean; // Use vortexai-l0 locally
  memory?: {
    enabled?: boolean;         // Default: true
    autoSave?: boolean;        // Default: true
    contextStrategy?: string;  // 'relevance' | 'temporal' | 'hybrid'
    maxContextTokens?: number; // Default: 4000
  };
});

API Endpoints

The SDK connects to these endpoints by default:

Feature            Endpoint
Chat Completions   POST /v1/chat/completions
Memory API         GET/POST /api/v1/memory
Health Check       GET /health

All requests are authenticated via the Authorization: Bearer lano_xxx header.
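Under the hood, each call is an authenticated HTTP request. The sketch below shows how a chat-completions request could be assembled from the documented endpoint and header; buildChatRequest is our own illustrative helper (not an SDK export), and the payload fields mirror the OpenAI-style shape the SDK examples use:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Assemble the URL and fetch options for POST /v1/chat/completions.
function buildChatRequest(
  apiKey: string,
  messages: ChatMessage[],
  baseUrl = 'https://api.lanonasis.com',
) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`, // lano_xxx key
      },
      body: JSON.stringify({ messages, model: 'gpt-4o-mini' }),
    },
  };
}

const req = buildChatRequest('lano_demo', [{ role: 'user', content: 'Hi' }]);
// req.url and req.options can be passed straight to fetch(req.url, req.options)
```

The SDK's client wraps this plumbing plus retries and timeouts, so in practice you only reach for raw requests when debugging or calling the API from another language.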

Architecture

This SDK integrates with existing Lanonasis infrastructure:

  • @lanonasis/memory-sdk - Persistent memory and context building
  • vortexai-l0 - Local orchestration engine
  • Backend API - Chat completions and remote orchestration

Migration from v0.1.0

// Old (v0.1.0)
import { AiSDK } from '@lanonasis/ai-sdk';
const sdk = new AiSDK();
await sdk.orchestrate('query');

// New (v0.2.x)
import { LanonasisAI } from '@lanonasis/ai-sdk';
const ai = new LanonasisAI({ apiKey: process.env.LANONASIS_API_KEY! });
await ai.orchestrate('query');
// Or simple:
const message = await ai.send('query');

License

MIT

Memory-Plugin Direct Access (v0.3.0+)

For direct access to L0's memory-plugin APIs:

import { memoryAPI, configureMemoryPlugin } from '@lanonasis/ai-sdk';

// Configure the memory plugin (optional if using LanonasisAI)
configureMemoryPlugin({
  apiUrl: 'https://api.lanonasis.com',
  authToken: 'lano_xxx',
});

// Core operations
await memoryAPI.search('project requirements');
await memoryAPI.create({ title: 'Notes', content: '...', type: 'context' });
await memoryAPI.list({ type: 'project' });

// Intelligence features
await memoryAPI.suggestTags('memory-id');
await memoryAPI.findRelated('memory-id');
await memoryAPI.detectDuplicates(0.9);

// Behavioral features
await memoryAPI.recallBehavior({ task: 'deploy', directory: '/app' });
await memoryAPI.suggestNextAction({ task: 'fix bug', completed: ['found issue'] });
await memoryAPI.recordPattern({ trigger: 'user asked...', actions: [...] });

Advanced (Plugins)

import { LanonasisAI, createPluginManager } from '@lanonasis/ai-sdk';

const plugins = createPluginManager();
plugins.register({
  metadata: { name: 'demo', version: '1.0.0', description: 'Demo plugin' },
  triggers: ['demo'],
  handler: async (ctx) => ({ message: `Hello ${ctx.query}`, type: 'orchestration' })
});

const sdk = new LanonasisAI({ apiKey: process.env.LANONASIS_API_KEY!, plugins });
await sdk.orchestrate('demo request');

Smoke test (browser bundling)

Ensures no Node-only deps leak into browser builds.

cd packages/ai-sdk
bun run smoke:web   # esbuild bundle → dist-smoke/index.js

Build

cd packages/ai-sdk
bun run build

Notes

  • Backed by the vortexai-l0 browser-safe entry point; the CLI lives in vortexai-l0/dist/node/cli.js.
  • Now includes memory-plugin re-exports for direct MaaS integration (v0.3.0+).
  • Last validated: 2026-01-23 via bun run build and bun run smoke:web.
