# Jodit AI Adapter Service

Universal AI adapter service for Jodit Editor AI Assistant Pro, built on the Vercel AI SDK.

This service provides a secure, server-side proxy to AI providers (OpenAI, DeepSeek, Claude, etc.) for Jodit Editor's AI Assistant Pro plugin. It handles API key management, authentication, and request routing to the configured providers.
## Features

- **Secure API Key Management** - API keys are stored server-side and never exposed to clients
- **Authentication** - validates API keys (32 characters, `A-F`, `0-9`, hyphens) and referer headers
- **Multi-Provider Support** - OpenAI, DeepSeek, Anthropic, Google (extensible)
- **Streaming Support** - real-time streaming responses using Server-Sent Events (SSE)
- **Tool Calling** - full support for function/tool calling
- **Rate Limiting** - configurable rate limiting with an in-memory or Redis backend
- **Distributed Support** - Redis-based rate limiting for multi-instance deployments
- **Production Ready** - Docker support, TypeScript, comprehensive error handling
- **Logging** - Winston-based logging with configurable levels
- **Testing** - Jest with comprehensive test coverage
## Architecture

```
┌─────────────┐  HTTPS   ┌──────────────────┐  HTTPS   ┌─────────────┐
│    Jodit    │─────────►│  Adapter Service │─────────►│ AI Provider │
│  AI Plugin  │          │    (This repo)   │          │  (OpenAI)   │
└─────────────┘          └──────────────────┘          └─────────────┘
     Client                     Server                    External
```
## Installation

### Using npm

```bash
npm install jodit-ai-adapter
```

### Using Docker

```bash
docker build -t jodit-ai-adapter .
docker run -p 8082:8082 --env-file .env jodit-ai-adapter
```
## Quick Start

### 1. Set Up the Environment

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` and add your API keys:

```bash
PORT=8082
NODE_ENV=development

# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_DEFAULT_MODEL=gpt-4o

# CORS (use specific origins in production)
CORS_ORIGIN=*
```

### 2. Install Dependencies

```bash
npm install
```

### 3. Run the Development Server

```bash
npm run dev
```

The service will be available at `http://localhost:8082`.

### 4. Build for Production

```bash
npm run build
npm start
```
## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `PORT` | Server port | `8082` |
| `NODE_ENV` | Environment mode | `development` |
| `LOG_LEVEL` | Logging level | `debug` (dev), `info` (prod) |
| `CORS_ORIGIN` | CORS allowed origins | `*` |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `OPENAI_DEFAULT_MODEL` | Default OpenAI model | `gpt-5.1` |
| `HTTP_PROXY` | HTTP/SOCKS5 proxy URL | - |
| `RATE_LIMIT_ENABLED` | Enable rate limiting | `false` |
| `RATE_LIMIT_TYPE` | Rate limiter type (`memory` or `redis`) | `memory` |
| `RATE_LIMIT_MAX_REQUESTS` | Max requests per window | `100` |
| `RATE_LIMIT_WINDOW_MS` | Time window in ms | `60000` |
| `REDIS_URL` | Redis connection URL | - |
| `REDIS_PASSWORD` | Redis password | - |
| `REDIS_DB` | Redis database number | `0` |
| `CONFIG_FILE` | Path to JSON config file | - |
### Configuration File

You can use a JSON configuration file instead of environment variables:

```json
{
  "port": 8082,
  "debug": true,
  "requestTimeout": 120000,
  "maxRetries": 3,
  "corsOrigin": "*",
  "requireReferer": false,
  "providers": {
    "openai": {
      "type": "openai",
      "defaultModel": "gpt-4o",
      "apiKey": "sk-..."
    }
  }
}
```

Load it with:

```bash
CONFIG_FILE=./config.json npm start
```
## API Endpoints

### Health Check

```http
GET /health
```

Returns service status and available providers.

Response:

```json
{
  "status": "ok",
  "timestamp": "2025-01-22T10:30:00.000Z",
  "providers": ["openai"]
}
```
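Since the health payload is plain JSON, a client can narrow it with a small type guard before trusting it. This is an illustrative sketch based on the example response above (the field names come from that payload, not a published schema):

```typescript
// Shape of the /health payload shown above (assumed from the example).
interface HealthResponse {
  status: string;
  timestamp: string;
  providers: string[];
}

// Runtime check that an unknown JSON value matches HealthResponse.
function isHealthResponse(value: unknown): value is HealthResponse {
  const o = value as Record<string, unknown>;
  return (
    typeof o === "object" &&
    o !== null &&
    typeof o.status === "string" &&
    typeof o.timestamp === "string" &&
    Array.isArray(o.providers)
  );
}
```

A monitoring script could apply this to `await (await fetch('/health')).json()` and alert when the check fails.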
### AI Request (Streaming)

```http
POST /ai/request
Content-Type: application/json
Authorization: Bearer YOUR-API-KEY-32-CHARS
```

Request body:

```json
{
  "provider": "openai",
  "context": {
    "mode": "full",
    "conversationId": "conv_123",
    "messages": [
      {
        "id": "msg_1",
        "role": "user",
        "content": "Hello!",
        "timestamp": 1234567890
      }
    ],
    "tools": [],
    "conversationOptions": {
      "model": "gpt-4o",
      "temperature": 0.7
    },
    "instructions": "You are a helpful assistant."
  }
}
```

Streaming response (SSE):

```
event: created
data: {"type":"created","response":{"responseId":"resp_123","content":"","finished":false}}

event: text-delta
data: {"type":"text-delta","delta":"Hello"}

event: text-delta
data: {"type":"text-delta","delta":"!"}

event: completed
data: {"type":"completed","response":{"responseId":"resp_123","content":"Hello!","finished":true}}
```
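The frames above follow the standard SSE wire format: an `event:` line, a `data:` line, and a blank-line separator. As a rough sketch of how a client might turn a decoded chunk into events, assuming each chunk holds only complete frames:

```typescript
interface SseEvent {
  event: string;
  data: Record<string, unknown>;
}

// Parse a decoded SSE chunk into (event name, JSON payload) pairs.
// Assumes the chunk contains only complete frames; a production parser
// would also buffer partial frames across chunks.
function parseSse(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const frame of chunk.split("\n\n")) {
    let event = "message"; // SSE default event name
    let data = "";
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data += line.slice(5).trim();
    }
    if (data) events.push({ event, data: JSON.parse(data) });
  }
  return events;
}
```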
### Provider Info

```http
GET /ai/providers
Authorization: Bearer YOUR-API-KEY-32-CHARS
```

Returns configured providers and their settings.
## Authentication

The service validates:

- **API key format** - must be 32 characters containing `A-F`, `0-9`, and hyphens
- **API key header** - sent via `Authorization: Bearer <key>` or `x-api-key: <key>`
- **Custom validation** - optional `checkAuthentication` callback
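The format rule above can be expressed as a single regular expression. A sketch of an equivalent client-side pre-check (the service's actual validation code may differ):

```typescript
// True if the key is exactly 32 characters drawn from A-F, 0-9, and '-'.
function isValidApiKey(key: string): boolean {
  return /^[A-F0-9-]{32}$/.test(key);
}
```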
### Custom Authentication

```typescript
import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  // Return a user ID to accept the request, or null to reject it
  checkAuthentication: async (apiKey, referer, request) => {
    const user = await db.users.findByApiKey(apiKey);
    if (!user || !user.active) {
      return null;
    }
    return user.id;
  }
});
```
## Usage Tracking

Track AI usage (tokens, costs) with a callback:

```typescript
import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  checkAuthentication: async (apiKey, referer) => {
    const user = await db.users.findByApiKey(apiKey);
    return user?.id || null;
  },
  onUsage: async (stats) => {
    await db.usage.create({
      userId: stats.userId,
      provider: stats.provider,
      model: stats.model,
      conversationId: stats.conversationId,
      promptTokens: stats.promptTokens,
      completionTokens: stats.completionTokens,
      totalTokens: stats.totalTokens,
      duration: stats.duration,
      timestamp: new Date(stats.timestamp)
    });

    if (stats.totalTokens) {
      await db.users.decrementTokens(stats.userId, stats.totalTokens);
    }

    console.log(`User ${stats.userId} used ${stats.totalTokens} tokens`);
  }
});
```

Usage stats interface:

```typescript
interface UsageStats {
  userId: string;
  apiKey: string;
  provider: string;
  model: string;
  conversationId: string;
  responseId: string;
  promptTokens?: number;
  completionTokens?: number;
  totalTokens?: number;
  timestamp: number;
  duration: number;
  metadata?: Record<string, unknown>;
}
```
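A typical consumer of `onUsage` aggregates these records, for example to compute a per-user total for billing. An illustrative helper, assuming records shaped like `UsageStats` and treating missing token counts as zero:

```typescript
// Sum totalTokens across usage records; absent values count as 0.
function sumTokens(stats: Array<{ totalTokens?: number }>): number {
  return stats.reduce((sum, s) => sum + (s.totalTokens ?? 0), 0);
}
```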
## Rate Limiting

The service includes built-in rate limiting to prevent abuse and manage resource usage. Rate limiting can be configured to use either in-memory storage (for single-instance deployments) or Redis (for distributed, multi-instance deployments).
### Configuration

#### In-Memory Rate Limiting (Single Instance)

```bash
RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=memory
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MS=60000
```

This configuration allows 100 requests per minute per user/IP address.
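Conceptually, an in-memory limiter keeps a counter per key and resets it when the window elapses. A minimal fixed-window sketch equivalent to the settings above (the shipped `MemoryRateLimiter` may use a different algorithm, such as a sliding window):

```typescript
// Fixed-window counter: allow up to maxRequests per windowMs per key.
class FixedWindowCounter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private maxRequests: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window: reset the counter.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.maxRequests;
  }
}
```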
#### Redis Rate Limiting (Distributed)

For production deployments with multiple instances, use Redis:

```bash
RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=redis
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MS=60000
REDIS_URL=redis://localhost:6379
REDIS_PASSWORD=your-password
REDIS_DB=0
```
#### Using Docker Compose with Redis

For development, use the provided Docker Compose configuration:

```bash
docker-compose -f docker-compose.dev.yml up -d
```

Then configure your app to use Redis:

```bash
RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=redis
REDIS_URL=redis://localhost:6379
```
### Programmatic Configuration

```typescript
import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  rateLimit: {
    enabled: true,
    type: 'redis',
    maxRequests: 100,
    windowMs: 60000,
    redisUrl: 'redis://localhost:6379',
    keyPrefix: 'rl:'
  },
  providers: {
    openai: {
      type: 'openai',
      apiKey: process.env.OPENAI_API_KEY
    }
  }
});
```
When rate limiting is enabled, the following headers are included in responses:

- `X-RateLimit-Limit` - maximum requests allowed in the window
- `X-RateLimit-Remaining` - remaining requests in the current window
- `X-RateLimit-Reset` - ISO 8601 timestamp when the rate limit resets
- `Retry-After` - seconds to wait before retrying (only when the limit is exceeded)
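On the client side, either header can drive a backoff. A sketch that derives a wait time from the `X-RateLimit-Reset` timestamp (header names as listed above):

```typescript
// Seconds a client should wait, given the ISO 8601 reset timestamp.
function retryAfterSeconds(resetIso: string, now: number = Date.now()): number {
  const resetMs = Date.parse(resetIso);
  // Clamp to 0 if the reset time is already in the past.
  return Math.max(0, Math.ceil((resetMs - now) / 1000));
}
```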
### Rate Limit Response

When the rate limit is exceeded, the service returns a `429 Too Many Requests` error:

```json
{
  "success": false,
  "error": {
    "code": 429,
    "message": "Too many requests, please try again later",
    "details": {
      "limit": 100,
      "current": 101,
      "resetTime": 45000
    }
  }
}
```
By default, rate limiting uses:

- **User ID** (if authenticated via the `checkAuthentication` callback)
- **IP address** (fallback if no user ID)

This means authenticated users are tracked by their user ID, while anonymous requests are tracked by IP address.
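That selection rule can be sketched as a one-line key builder. The `user:`/`ip:` prefixes are illustrative (they match the `skip` example in the custom rate limiting section, but may not be the service's exact internal format):

```typescript
// Prefer the authenticated user ID; fall back to the client IP.
function rateLimitKey(userId: string | null, ip: string): string {
  return userId ? `user:${userId}` : `ip:${ip}`;
}
```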
### Custom Rate Limiting

You can implement custom rate limiting logic:

```typescript
import { start, MemoryRateLimiter } from 'jodit-ai-adapter';

const rateLimiter = new MemoryRateLimiter({
  maxRequests: 100,
  windowMs: 60000,
  // Skip rate limiting for admin users
  skip: (key) => key.startsWith('user:admin-')
});

await start({
  port: 8082,
  // Pass the custom limiter to the service (option name may differ in your version)
  rateLimiter
});
```
## Client Integration (Jodit)

### Basic Setup

```typescript
import { Jodit } from 'jodit-pro';

const editor = Jodit.make('#editor', {
  aiAssistantPro: {
    apiRequest: async (context, signal) => {
      const response = await fetch('http://localhost:8082/ai/request', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': 'Bearer YOUR-API-KEY-32-CHARS'
        },
        body: JSON.stringify({
          provider: 'openai',
          context
        }),
        signal
      });

      // Read the SSE stream chunk by chunk
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        const chunk = decoder.decode(value, { stream: true });
        // Parse the SSE events in `chunk` and hand them to the editor
        // (see docs/client-integration.md for the full parsing logic)
      }
    }
  }
});
```
### With Streaming Support

See `docs/client-integration.md` for complete examples.
## Development

### Project Structure

```
jodit-ai-adapter/
├── src/
│   ├── adapters/        # AI provider adapters
│   │   ├── base-adapter.ts
│   │   ├── openai-adapter.ts
│   │   └── adapter-factory.ts
│   ├── middlewares/     # Express middlewares
│   │   ├── auth.ts
│   │   └── cors.ts
│   ├── types/           # TypeScript types
│   │   ├── jodit-ai.ts
│   │   ├── config.ts
│   │   └── index.ts
│   ├── helpers/         # Utility functions
│   │   └── logger.ts
│   ├── config/          # Configuration
│   │   └── default-config.ts
│   ├── app.ts           # Express app setup
│   ├── index.ts         # Main entry point
│   └── run.ts           # CLI runner
├── docs/                # Documentation
├── tests/               # Test files
├── Dockerfile
├── package.json
└── tsconfig.json
```
### Available Scripts

```bash
npm run dev            # Start the development server
npm run build          # Build for production
npm start              # Run the production build
npm test               # Run the test suite
npm run test:watch     # Run tests in watch mode
npm run test:coverage  # Run tests with coverage
npm run lint           # Lint the source
npm run lint:fix       # Lint and auto-fix issues
npm run format         # Format the source
npm run docker:build   # Build the Docker image
npm run docker:run     # Run the Docker container
```
### Adding a New Provider

1. Create an adapter class extending `BaseAdapter`:

```typescript
import { BaseAdapter } from './base-adapter';

export class DeepSeekAdapter extends BaseAdapter {
  protected async processRequest(context, signal) {
    // Call the DeepSeek API and stream the result back
  }
}
```

2. Register it with the adapter factory:

```typescript
AdapterFactory.adapters.set('deepseek', DeepSeekAdapter);
```

3. Add it to the provider configuration:

```typescript
providers: {
  deepseek: {
    type: 'deepseek',
    apiKey: process.env.DEEPSEEK_API_KEY,
    defaultModel: 'deepseek-chat'
  }
}
```
## Testing

### Run Tests

```bash
npm test
```

### Example Test with nock

```typescript
import nock from 'nock';
import { OpenAIAdapter } from '../adapters/openai-adapter';

describe('OpenAIAdapter', () => {
  it('should handle streaming response', async () => {
    // Mock the OpenAI endpoint
    nock('https://api.openai.com')
      .post('/v1/chat/completions')
      .reply(200, {
        // ...mocked completion payload
      });

    const adapter = new OpenAIAdapter({
      apiKey: 'test-key'
    });

    // `context` and `signal` are assumed to come from test fixtures
    const result = await adapter.handleRequest(context, signal);
    expect(result.mode).toBe('stream');
  });
});
```
## Security Best Practices

- **Never expose API keys** in client-side code
- **Use HTTPS** in production
- **Configure CORS properly** - don't use `*` in production
- **Implement rate limiting** (e.g., using `express-rate-limit`)
- **Validate referer headers** by setting `requireReferer: true`
- **Use environment variables** for sensitive data
- **Implement custom authentication** for production use
## Deployment

### Docker

```bash
docker build -t jodit-ai-adapter .
docker run -d \
  -p 8082:8082 \
  -e OPENAI_API_KEY=sk-... \
  --name jodit-ai-adapter \
  jodit-ai-adapter
```

### Docker Compose

```yaml
version: '3.8'
services:
  jodit-ai-adapter:
    build: .
    ports:
      - "8082:8082"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - NODE_ENV=production
    restart: unless-stopped
```
## Troubleshooting

### Common Issues

**API Key Invalid Format**

- Ensure your API key is exactly 32 characters
- It must contain only `A-F`, `0-9`, and hyphens

**CORS Errors**

- Check the `CORS_ORIGIN` configuration
- Ensure the client origin is allowed

**Streaming Not Working**

- Check that the client properly handles SSE
- Verify the `Content-Type: text/event-stream` header

**Provider Not Found**

- Ensure the provider is configured in the `providers` object
- Check that the provider name matches exactly (case-sensitive)
## Contributing

Contributions are welcome! Please see CONTRIBUTING.md for details.

## License

MIT License - see LICENSE for details.

## Author

Chupurnov Valeriy <chupurnov@gmail.com>

## Links