
open-sse

Universal AI proxy library with SSE streaming support for OpenAI, Claude, Gemini and more.

Installation

npm install open-sse

Features

  • Multi-provider support (OpenAI, Claude, Gemini, Copilot, Codex, etc.)
  • SSE streaming for real-time responses
  • Automatic format translation between providers
  • Token refresh & OAuth management
  • Account fallback handling
  • Combo models (route across multiple providers)

Quick Start

Basic Usage

import { handleChatCore, getModelInfoCore } from "open-sse";

async function handleChat(request) {
  const body = await request.json();
  
  // Get model info (auto-detect provider)
  const modelInfo = await getModelInfoCore(body.model);
  
  // Provider credentials
  const credentials = {
    provider: modelInfo.provider,
    accessToken: "your-token"
  };
  
  // Handle chat with auto translation & streaming
  return await handleChatCore(body, credentials, console);
}

With Token Refresh

import { handleChatCore, isTokenExpiringSoon, refreshTokenByProvider } from "open-sse";

async function handleChat(request, credentials) {
  const body = await request.json();
  
  // Auto refresh if expiring
  if (isTokenExpiringSoon(credentials)) {
    const newTokens = await refreshTokenByProvider(
      credentials.provider,
      credentials,
      console
    );
    credentials = { ...credentials, ...newTokens };
  }
  
  return await handleChatCore(body, credentials, console);
}

Format Translation

import { translateRequest, translateResponse } from "open-sse";

// OpenAI → Claude
const claudeRequest = await translateRequest(openAIRequest, "openai", "claude");

// Claude → OpenAI
const openAIResponse = await translateResponse(claudeResponse, "claude", "openai");

Combo Models

import { handleComboChat } from "open-sse/services/combo.js";

const models = [
  { provider: "claude", model: "claude-3-5-sonnet-20241022" },
  { provider: "openai", model: "gpt-4" }
];

const response = await handleComboChat(request, models, getCredentials, console);
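
The getCredentials argument above is a callback that supplies credentials for each provider in the combo list. Its exact signature is not documented here, so the following is only a sketch that assumes it receives a provider id and returns the same { provider, accessToken } shape passed to handleChatCore:

// Hypothetical credential lookup for the combo example above.
// Assumption: getCredentials receives a provider id and returns
// the same { provider, accessToken } object used by handleChatCore.
const tokens = {
  claude: process.env.CLAUDE_TOKEN,
  openai: process.env.OPENAI_TOKEN
};

function getCredentials(provider) {
  return {
    provider,
    accessToken: tokens[provider]
  };
}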

Main Exports

// Handlers
import { handleChatCore, isTokenExpiringSoon } from "open-sse";

// Services
import { getModelInfoCore, parseModel } from "open-sse";
import { buildProviderUrl, buildProviderHeaders, detectFormat } from "open-sse";
import { refreshTokenByProvider, refreshClaudeOAuthToken } from "open-sse";
import { checkFallbackError, isAccountUnavailable } from "open-sse";

// Translation
import { translateRequest, translateResponse, needsTranslation } from "open-sse";

// Utils
import { errorResponse } from "open-sse";
import { createSSETransformStreamWithLogger } from "open-sse";
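
The fallback helpers (checkFallbackError, isAccountUnavailable) and errorResponse are exported but not demonstrated above. Below is a minimal sketch of how they might be combined to retry a request against a second account; the signatures and error-handling behavior shown are assumptions, not documented API:

import { handleChatCore, checkFallbackError, isAccountUnavailable, errorResponse } from "open-sse";

// Sketch only. Assumptions: handleChatCore throws on failure,
// checkFallbackError(error) reports whether an error is worth retrying,
// and isAccountUnavailable(error) flags exhausted or blocked accounts.
async function chatWithFallback(body, accounts) {
  let lastError;
  for (const credentials of accounts) {
    try {
      return await handleChatCore(body, credentials, console);
    } catch (error) {
      lastError = error;
      // Only move on to the next account for errors the library marks as fallback-worthy.
      if (!checkFallbackError(error) && !isAccountUnavailable(error)) throw error;
    }
  }
  // errorResponse signature is assumed here to be (message, status).
  return errorResponse(lastError?.message ?? "All accounts unavailable", 502);
}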

Configuration

Environment Variables

# Enable detailed request/response logging (default: false)
ENABLE_REQUEST_LOGS=true

When enabled, logs are saved to the logs/ directory with the following structure:

logs/
  └── {sourceFormat}_{targetFormat}_{model}_{timestamp}/
      ├── 0_client_raw_request.json
      ├── 1_raw_request.json
      ├── 2_converted_request.json
      ├── 3_raw_response.json
      └── 4_converted_response.json

Provider Models

import { PROVIDER_MODELS, getProviderModels } from "open-sse";

const claudeModels = getProviderModels("claude");

Constants

import { PROVIDERS, OAUTH_ENDPOINTS, CACHE_TTL } from "open-sse";
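
PROVIDERS can be used to validate a requested provider before building credentials. The snippet below assumes PROVIDERS is a collection of provider identifiers; its actual shape (array or object) is not documented here:

import { PROVIDERS, getModelInfoCore } from "open-sse";

// Assumption: PROVIDERS holds the supported provider identifiers.
async function assertSupported(model) {
  const info = await getModelInfoCore(model);
  const supported = Array.isArray(PROVIDERS) ? PROVIDERS : Object.values(PROVIDERS);
  if (!supported.includes(info.provider)) {
    throw new Error(`Unsupported provider: ${info.provider}`);
  }
  return info;
}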

Examples

Next.js API Route

// app/api/chat/route.js
import { handleChatCore, getModelInfoCore } from "open-sse";

export async function POST(request) {
  const body = await request.json();
  const modelInfo = await getModelInfoCore(body.model);
  
  const credentials = {
    provider: modelInfo.provider,
    accessToken: process.env.API_TOKEN
  };
  
  return await handleChatCore(body, credentials, console);
}

Express.js

import express from "express";
import { handleChatCore, getModelInfoCore } from "open-sse";

const app = express();
app.use(express.json()); // required so req.body is parsed from the JSON request body

app.post("/api/chat", async (req, res) => {
  const modelInfo = await getModelInfoCore(req.body.model);
  
  const credentials = {
    provider: modelInfo.provider,
    accessToken: process.env.API_TOKEN
  };
  
  const response = await handleChatCore(req.body, credentials, console);
  return res.send(response);
});
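
Both route examples return the result of handleChatCore directly, which the feature list describes as an SSE stream. The snippet below is a browser-side sketch using only standard web APIs (it is not part of open-sse), showing one common way to read such a stream with fetch; adapt the event parsing to whatever format your route actually emits:

// Client-side sketch (standard fetch/ReadableStream APIs, not part of open-sse):
// reads the SSE stream returned by the /api/chat route above.
async function streamChat(messages) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "gpt-4", messages })
  });

  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk may contain one or more "data: ..." SSE lines.
    console.log(decoder.decode(value, { stream: true }));
  }
}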

Supported Providers

  • OpenAI (GPT-4, GPT-3.5)
  • Anthropic Claude (Sonnet, Opus, Haiku)
  • Google Gemini
  • GitHub Copilot
  • Codex
  • Qwen

License

MIT
