Jodit AI Adapter Service


Universal AI adapter service for Jodit Editor AI Assistant Pro using Vercel AI SDK.

This service provides a secure, server-side proxy for AI providers (OpenAI, DeepSeek, Claude, etc.) that can be used with Jodit Editor's AI Assistant Pro plugin. It handles API key management, authentication, and request routing to various AI providers.

Features

  • 🔒 Secure API Key Management - API keys stored server-side, not exposed to clients
  • 🔑 Authentication - Validates API keys (32 characters: A-F, 0-9, hyphens) and referer headers
  • 🌐 Multi-Provider Support - OpenAI, DeepSeek, Anthropic, Google (extensible)
  • 📡 Streaming Support - Real-time streaming responses using Server-Sent Events (SSE)
  • 🛠️ Tool Calling - Full support for function/tool calling
  • 🚦 Rate Limiting - Configurable rate limiting with in-memory or Redis backend
  • 🔄 Distributed Support - Redis-based rate limiting for multi-instance deployments
  • 🚀 Production Ready - Docker support, TypeScript, comprehensive error handling
  • 📊 Logging - Winston-based logging with different levels
  • 🧪 Testing - Jest with comprehensive test coverage

Architecture

┌─────────────┐         ┌──────────────────┐         ┌─────────────┐
│   Jodit     │ HTTPS   │  Adapter Service │  HTTPS  │  AI Provider│
│  AI Plugin  ├────────►│   (This repo)    ├────────►│  (OpenAI)   │
└─────────────┘         └──────────────────┘         └─────────────┘
     Client                    Server                     External

Installation

Using npm

npm install jodit-ai-adapter

Using Docker

docker build -t jodit-ai-adapter .
docker run -p 8082:8082 --env-file .env jodit-ai-adapter

Quick Start

1. Setup Environment

Copy the example environment file:

cp .env.example .env

Edit .env and add your API keys:

PORT=8082
NODE_ENV=development

# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_DEFAULT_MODEL=gpt-4o

# CORS (use specific origins in production)
CORS_ORIGIN=*

2. Install Dependencies

npm install

3. Run Development Server

npm run dev

The service will be available at http://localhost:8082

4. Build for Production

npm run build
npm start

Configuration

Environment Variables

Variable                   Description                            Default
PORT                       Server port                            8082
NODE_ENV                   Environment mode                       development
LOG_LEVEL                  Logging level                          debug (dev), info (prod)
CORS_ORIGIN                CORS allowed origins                   *
OPENAI_API_KEY             OpenAI API key                         -
OPENAI_DEFAULT_MODEL       Default OpenAI model                   gpt-5.1
HTTP_PROXY                 HTTP/SOCKS5 proxy URL                  -
RATE_LIMIT_ENABLED         Enable rate limiting                   false
RATE_LIMIT_TYPE            Rate limiter type (memory or redis)    memory
RATE_LIMIT_MAX_REQUESTS    Max requests per window                100
RATE_LIMIT_WINDOW_MS       Time window in ms                      60000
REDIS_URL                  Redis connection URL                   -
REDIS_PASSWORD             Redis password                         -
REDIS_DB                   Redis database number                  0
CONFIG_FILE                Path to JSON config file               -

Configuration File

You can use a JSON configuration file instead of environment variables:

{
  "port": 8082,
  "debug": true,
  "requestTimeout": 120000,
  "maxRetries": 3,
  "corsOrigin": "*",
  "requireReferer": false,
  "providers": {
    "openai": {
      "type": "openai",
      "defaultModel": "gpt-4o",
      "apiKey": "sk-..."
    }
  }
}

Load it with:

CONFIG_FILE=./config.json npm start

API Endpoints

Health Check

GET /health

Returns service status and available providers.

Response:

{
  "status": "ok",
  "timestamp": "2025-01-22T10:30:00.000Z",
  "providers": ["openai"]
}

AI Request (Streaming)

POST /ai/request
Content-Type: application/json
Authorization: Bearer YOUR-API-KEY-32-CHARS

Request Body:

{
  "provider": "openai",
  "context": {
    "mode": "full",
    "conversationId": "conv_123",
    "messages": [
      {
        "id": "msg_1",
        "role": "user",
        "content": "Hello!",
        "timestamp": 1234567890
      }
    ],
    "tools": [],
    "conversationOptions": {
      "model": "gpt-4o",
      "temperature": 0.7
    },
    "instructions": "You are a helpful assistant."
  }
}

Streaming Response (SSE):

event: created
data: {"type":"created","response":{"responseId":"resp_123","content":"","finished":false}}

event: text-delta
data: {"type":"text-delta","delta":"Hello"}

event: text-delta
data: {"type":"text-delta","delta":"!"}

event: completed
data: {"type":"completed","response":{"responseId":"resp_123","content":"Hello!","finished":true}}

Provider Info

GET /ai/providers
Authorization: Bearer YOUR-API-KEY-32-CHARS

Returns configured providers and their settings.
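
An illustrative response (assumed shape; the exact fields depend on your configuration):

{
  "providers": [
    {
      "name": "openai",
      "defaultModel": "gpt-4o"
    }
  ]
}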

Authentication

The service validates:

  • API Key Format: Must be 32 characters containing A-F, 0-9, and hyphens
  • API Key Header: Sent via Authorization: Bearer <key> or x-api-key: <key>
  • Custom Validation: Optional checkAuthentication callback
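
For illustration, the documented format rule expressed as a regular expression (a hypothetical helper; the service's built-in validator may differ in detail):

// Matches exactly 32 characters drawn from A-F, 0-9, and hyphens
const API_KEY_PATTERN = /^[A-F0-9-]{32}$/;

function isValidApiKeyFormat(key: string): boolean {
  return API_KEY_PATTERN.test(key);
}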

Custom Authentication

import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  checkAuthentication: async (apiKey, referer, request) => {
    // Validate API key against your database
    const user = await db.users.findByApiKey(apiKey);

    if (!user || !user.active) {
      return null; // Reject
    }

    return user.id; // Accept and return user ID
  }
});

Usage Tracking

Track AI usage (tokens, costs) with a callback:

import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  checkAuthentication: async (apiKey, referer) => {
    const user = await db.users.findByApiKey(apiKey);
    return user?.id || null;
  },
  onUsage: async (stats) => {
    // Save usage statistics to database
    await db.usage.create({
      userId: stats.userId,
      provider: stats.provider,
      model: stats.model,
      conversationId: stats.conversationId,
      promptTokens: stats.promptTokens,
      completionTokens: stats.completionTokens,
      totalTokens: stats.totalTokens,
      duration: stats.duration,
      timestamp: new Date(stats.timestamp)
    });

    // Update user's token balance
    if (stats.totalTokens) {
      await db.users.decrementTokens(stats.userId, stats.totalTokens);
    }

    console.log(`User ${stats.userId} used ${stats.totalTokens} tokens`);
  }
});

Usage Stats Interface:

interface UsageStats {
  userId: string;              // User ID from authentication
  apiKey: string;              // API key used
  provider: string;            // AI provider (openai, deepseek, etc.)
  model: string;               // Model used (gpt-4o, etc.)
  conversationId: string;      // Conversation ID
  responseId: string;          // Response ID
  promptTokens?: number;       // Input tokens
  completionTokens?: number;   // Output tokens
  totalTokens?: number;        // Total tokens
  timestamp: number;           // Request timestamp (ms)
  duration: number;            // Request duration (ms)
  metadata?: Record<string, unknown>; // Additional data
}

Rate Limiting

The service includes built-in rate limiting to prevent abuse and manage resource usage. Rate limiting can be configured to use either in-memory storage (for single-instance deployments) or Redis (for distributed/multi-instance deployments).

Configuration

In-Memory Rate Limiting (Single Instance)

RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=memory
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MS=60000

This configuration allows 100 requests per minute per user/IP address.

Redis Rate Limiting (Distributed)

For production deployments with multiple instances, use Redis:

RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=redis
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MS=60000
REDIS_URL=redis://localhost:6379
REDIS_PASSWORD=your-password
REDIS_DB=0

Using Docker Compose with Redis

For development, use the provided Docker Compose configuration:

# Start Redis (the dev compose file also includes Redis Commander,
# a monitoring UI, at http://localhost:8081)
docker-compose -f docker-compose.dev.yml up -d

Then configure your app to use Redis:

RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=redis
REDIS_URL=redis://localhost:6379

Programmatic Configuration

import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  rateLimit: {
    enabled: true,
    type: 'redis',
    maxRequests: 100,
    windowMs: 60000, // 1 minute
    redisUrl: 'redis://localhost:6379',
    keyPrefix: 'rl:'
  },
  providers: {
    openai: {
      type: 'openai',
      apiKey: process.env.OPENAI_API_KEY
    }
  }
});

Rate Limit Headers

When rate limiting is enabled, the following headers are included in responses:

  • X-RateLimit-Limit: Maximum requests allowed in the window
  • X-RateLimit-Remaining: Remaining requests in current window
  • X-RateLimit-Reset: ISO 8601 timestamp when the rate limit resets
  • Retry-After: Seconds to wait before retrying (only when limit exceeded)
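
For example (illustrative values):

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 42
X-RateLimit-Reset: 2025-01-22T10:31:00.000Z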

Rate Limit Response

When rate limit is exceeded, the service returns a 429 Too Many Requests error:

{
  "success": false,
  "error": {
    "code": 429,
    "message": "Too many requests, please try again later",
    "details": {
      "limit": 100,
      "current": 101,
      "resetTime": 45000
    }
  }
}

Key Extraction

By default, rate limiting uses:

  • User ID (if authenticated via checkAuthentication callback)
  • IP Address (fallback if no user ID)

This means authenticated users are tracked by their user ID, while anonymous requests are tracked by IP address.
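
A sketch of that default keying logic (illustrative only, not the actual source):

// Authenticated requests are keyed by user ID, anonymous ones by IP address
function extractRateLimitKey(userId: string | null, ip: string): string {
  return userId ? `user:${userId}` : `ip:${ip}`;
}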

Custom Rate Limiting

You can implement custom rate limiting logic:

import { start, MemoryRateLimiter } from 'jodit-ai-adapter';

// Create a custom rate limiter with a skip function
const rateLimiter = new MemoryRateLimiter({
  maxRequests: 100,
  windowMs: 60000,
  skip: (key) => {
    // Skip rate limiting for admin users
    return key.startsWith('user:admin-');
  }
});

await start({
  port: 8082,
  rateLimiter, // pass the custom limiter in (option name assumed; check the package types)
  // ... other config
});

Client Integration (Jodit)

Basic Setup

import { Jodit } from 'jodit-pro';

const editor = Jodit.make('#editor', {
  aiAssistantPro: {
    apiRequest: async (context, signal) => {
      const response = await fetch('http://localhost:8082/ai/request', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': 'Bearer YOUR-API-KEY-32-CHARS'
        },
        body: JSON.stringify({
          provider: 'openai',
          context
        }),
        signal
      });

      // Handle streaming
      const reader = response.body.getReader();
      const decoder = new TextDecoder();

      // ... streaming logic (see full example in docs)
    }
  }
});

With Streaming Support

See docs/client-integration.md for complete examples.
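
As a rough sketch (assuming the SSE event format shown under "Streaming Response" above), the client-side streaming logic could look like this:

async function readSSE(
  response: Response,
  onDelta: (text: string) => void
): Promise<void> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line
    const events = buffer.split('\n\n');
    buffer = events.pop() ?? '';

    for (const event of events) {
      const dataLine = event
        .split('\n')
        .find((line) => line.startsWith('data: '));
      if (!dataLine) continue;

      const payload = JSON.parse(dataLine.slice('data: '.length));
      if (payload.type === 'text-delta') {
        onDelta(payload.delta);
      }
    }
  }
}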

Development

Project Structure

jodit-ai-adapter/
├── src/
│   ├── adapters/          # AI provider adapters
│   │   ├── base-adapter.ts
│   │   ├── openai-adapter.ts
│   │   └── adapter-factory.ts
│   ├── middlewares/       # Express middlewares
│   │   ├── auth.ts
│   │   └── cors.ts
│   ├── types/             # TypeScript types
│   │   ├── jodit-ai.ts
│   │   ├── config.ts
│   │   └── index.ts
│   ├── helpers/           # Utility functions
│   │   └── logger.ts
│   ├── config/            # Configuration
│   │   └── default-config.ts
│   ├── app.ts             # Express app setup
│   ├── index.ts           # Main entry point
│   └── run.ts             # CLI runner
├── docs/                  # Documentation
├── tests/                 # Test files
├── Dockerfile
├── package.json
└── tsconfig.json

Available Scripts

npm run dev              # Start development server with hot reload
npm run build            # Build for production
npm start                # Start production server
npm test                 # Run tests
npm run test:watch       # Run tests in watch mode
npm run test:coverage    # Run tests with coverage
npm run lint             # Lint code
npm run lint:fix         # Lint and fix code
npm run format           # Format code with Prettier
npm run docker:build     # Build Docker image
npm run docker:run       # Run Docker container

Adding a New Provider

1. Create an adapter class extending BaseAdapter:

// src/adapters/deepseek-adapter.ts
import { BaseAdapter } from './base-adapter';

export class DeepSeekAdapter extends BaseAdapter {
  protected async processRequest(context, signal) {
    // Implementation using the Vercel AI SDK
  }
}

2. Register it in the factory:

// src/adapters/adapter-factory.ts
AdapterFactory.adapters.set('deepseek', DeepSeekAdapter);

3. Add its configuration:

// src/config/default-config.ts
providers: {
  deepseek: {
    type: 'deepseek',
    apiKey: process.env.DEEPSEEK_API_KEY,
    defaultModel: 'deepseek-chat'
  }
}

Testing

Run Tests

npm test

Example Test with nock

import nock from 'nock';
import { OpenAIAdapter } from '../adapters/openai-adapter';

describe('OpenAIAdapter', () => {
  it('should handle streaming response', async () => {
    // Mock OpenAI API
    nock('https://api.openai.com')
      .post('/v1/chat/completions')
      .reply(200, {
        // Mock response
      });

    const adapter = new OpenAIAdapter({
      apiKey: 'test-key'
    });

    // Minimal request context and abort signal for the test
    const context = { mode: 'full', conversationId: 'conv_test', messages: [] };
    const signal = new AbortController().signal;

    const result = await adapter.handleRequest(context, signal);
    expect(result.mode).toBe('stream');
  });
});

Security Best Practices

  • Never expose API keys in client-side code
  • Use HTTPS in production
  • Configure CORS properly - Don't use * in production (see the example after this list)
  • Enable rate limiting - Built into the service; see the Rate Limiting section above
  • Validate referer headers when requireReferer: true
  • Use environment variables for sensitive data
  • Implement custom authentication for production use
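
For example, a production CORS setting pinned to your application's origin (hypothetical domain):

CORS_ORIGIN=https://app.example.com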

Deployment

Docker

# Build
docker build -t jodit-ai-adapter .

# Run
docker run -d \
  -p 8082:8082 \
  -e OPENAI_API_KEY=sk-... \
  --name jodit-ai-adapter \
  jodit-ai-adapter

Docker Compose

version: '3.8'
services:
  jodit-ai-adapter:
    build: .
    ports:
      - "8082:8082"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - NODE_ENV=production
    restart: unless-stopped

Troubleshooting

Common Issues

API Key Invalid Format

  • Ensure your API key is exactly 32 characters
  • Must contain only A-F, 0-9, and hyphens

CORS Errors

  • Check CORS_ORIGIN configuration
  • Ensure client origin is allowed

Streaming Not Working

  • Check that client properly handles SSE
  • Verify Content-Type: text/event-stream header

Provider Not Found

  • Ensure provider is configured in providers object
  • Check provider name matches exactly (case-sensitive)

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for details.

License

MIT License - see LICENSE for details.

Author

Chupurnov Valeriy chupurnov@gmail.com
