@samchon/openapi

flowchart
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"<i>Google</i>" --> legacy_gemini("<i> (legacy) Gemini</i>")
    legacy_gemini --"3.0" --> custom(["Custom JSON Schema"])
    chatgpt --"3.1"--> custom
    gemini --"3.1"--> standard(["Standard JSON Schema"])
    claude --"3.1"--> standard
  end


Transform OpenAPI documents into type-safe LLM function calling applications.

@samchon/openapi converts any version of OpenAPI/Swagger documents into LLM function calling schemas for OpenAI GPT, Claude, and Gemini. It supports every OpenAPI version (Swagger 2.0, OpenAPI 3.0, and OpenAPI 3.1) with full TypeScript type definitions. The library also works with MCP (Model Context Protocol) servers, enabling seamless AI agent development.

Key Features:

  • Universal OpenAPI Support: Works with Swagger 2.0, OpenAPI 3.0, and OpenAPI 3.1
  • LLM Function Calling: Auto-generates function schemas for OpenAI, Claude, and Gemini
  • Type-Safe Validation: Built-in validation with detailed error feedback for LLM responses
  • MCP Integration: Compose function calling schemas from MCP servers
  • Emended Specification: Standardized OpenAPI v3.1 format that removes ambiguities

Live Demo:

https://github.com/user-attachments/assets/e1faf30b-c703-4451-b68b-2e7a8170bce5

Watch how @samchon/openapi powers an AI shopping chatbot with @agentica

Quick Start

npm install @samchon/openapi

Transform your OpenAPI document into an LLM function calling application in just a few lines:

import { HttpLlm, IHttpLlmApplication, IHttpLlmFunction, OpenApi } from "@samchon/openapi";

// Load and convert your OpenAPI document
const document: OpenApi.IDocument = OpenApi.convert(swagger);

// Generate LLM function calling schemas
const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
  model: "chatgpt", // "chatgpt" | "claude" | "gemini"
  document,
});

// Find a function by path and method
const func: IHttpLlmFunction<"chatgpt"> | undefined = application.functions.find(
  (f) => f.path === "/bbs/articles" && f.method === "post"
);

if (!func) throw new Error("Function not found");

// Execute the function with LLM-composed arguments
const result: unknown = await HttpLlm.execute({
  connection: { host: "http://localhost:3000" },
  application,
  function: func,
  input: llmGeneratedArgs, // from OpenAI/Claude/Gemini
});

That's it! Your HTTP backend is now callable by AI.

OpenAPI Definitions

@samchon/openapi provides complete TypeScript definitions for all OpenAPI versions and introduces an "emended" OpenAPI v3.1 specification that serves as a universal intermediate format.

flowchart
  v20(Swagger v2.0) --upgrades--> emended[["<b><u>OpenAPI v3.1 (emended)</u></b>"]]
  v30(OpenAPI v3.0) --upgrades--> emended
  v31(OpenAPI v3.1) --emends--> emended
  emended --downgrades--> v20d(Swagger v2.0)
  emended --downgrades--> v30d(OpenAPI v3.0)

Supported Specifications:

  • SwaggerV2.IDocument - Swagger v2.0
  • OpenApiV3.IDocument - OpenAPI v3.0
  • OpenApiV3_1.IDocument - OpenAPI v3.1
  • OpenApi.IDocument - Emended OpenAPI v3.1, the universal intermediate format

What is "Emended" OpenAPI?

The emended specification removes ambiguities and duplications from OpenAPI v3.1, creating a cleaner, more consistent format. All conversions flow through this intermediate format.

Key Improvements:

  • Operations: Merges parameters from path and operation levels, resolves all references
  • JSON Schema: Eliminates mixed types, unifies nullable handling, standardizes array/tuple representations
  • Schema Composition: Consolidates anyOf, oneOf, allOf patterns into simpler structures

Converting Between Versions

import { OpenApi, OpenApiV3, SwaggerV2 } from "@samchon/openapi";

// Convert any version to the emended format
const emended: OpenApi.IDocument = OpenApi.convert(swagger); // accepts Swagger 2.0, OpenAPI 3.0, or 3.1

// Downgrade to older versions if needed
const v30: OpenApiV3.IDocument = OpenApi.downgrade(emended, "3.0");
const v20: SwaggerV2.IDocument = OpenApi.downgrade(emended, "2.0");

Validating OpenAPI Documents

Use typia for runtime validation with detailed type checking - far more accurate than other validators:

import { OpenApi, OpenApiV3, OpenApiV3_1, SwaggerV2 } from "@samchon/openapi";
import typia from "typia";

const document: any = await fetch("swagger.json").then(r => r.json());

// Validate with detailed error messages
const result: typia.IValidation<SwaggerV2.IDocument | OpenApiV3.IDocument | OpenApiV3_1.IDocument> =
  typia.validate<SwaggerV2.IDocument | OpenApiV3.IDocument | OpenApiV3_1.IDocument>(document);

if (result.success) {
  const emended: OpenApi.IDocument = OpenApi.convert(result.data);
} else {
  console.error(result.errors); // Detailed validation errors
}

Try it in the playground: Type assertion | Detailed validation

LLM Function Calling

flowchart
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"<i>Google</i>" --> legacy_gemini("<i> (legacy) Gemini</i>")
    legacy_gemini --"3.0" --> custom(["Custom JSON Schema"])
    chatgpt --"3.1"--> custom
    gemini --"3.1"--> standard(["Standard JSON Schema"])
    claude --"3.1"--> standard
  end

Turn your HTTP backend into an AI-callable service. @samchon/openapi converts your OpenAPI document into function schemas that OpenAI, Claude, and Gemini can understand and call.

Supported AI Models

IChatGptSchema - For OpenAI GPT

  • Fully compatible with OpenAI's strict mode
  • Uses JSDoc tags in description to bypass OpenAI's schema limitations

IClaudeSchema - For Anthropic Claude ⭐ Recommended

  • Follows the JSON Schema standard most closely
  • No artificial restrictions - the cleanest type definitions
  • The ideal default when you're unsure which model to use: it works for every model except OpenAI's strict mode and legacy Gemini (see the sketch after this section)

IGeminiSchema - For Google Gemini

  • Supports nearly the full JSON Schema specification (as of Nov 2025)
  • Previous versions had severe restrictions, but these have been removed

[!NOTE]

You can also compose ILlmApplication from a TypeScript class using typia.

https://typia.io/docs/llm/application

import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";

const app: ILlmApplication<"chatgpt"> =
  typia.llm.application<YourClassType, "chatgpt">();

Complete Example

Here's a full example showing how OpenAI GPT selects a function, fills arguments, and you execute it:


import { HttpLlm, OpenApi, IHttpLlmApplication, IHttpLlmFunction } from "@samchon/openapi";
import OpenAI from "openai";

// 1. Convert OpenAPI to LLM function calling application
const document: OpenApi.IDocument = OpenApi.convert(swagger);
const application: IHttpLlmApplication<"chatgpt"> =
  HttpLlm.application({
    model: "chatgpt",
    document,
  });

// 2. Find the function by path and method
const func: IHttpLlmFunction<"chatgpt"> | undefined = application.functions.find(
  (f) => f.path === "/shoppings/sellers/sale" && f.method === "post"
);
if (!func) throw new Error("Function not found");

// 3. Let OpenAI GPT call the function
const client: OpenAI = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion: OpenAI.ChatCompletion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful shopping assistant." },
    { role: "user", content: "I want to sell Microsoft Surface Pro 9..." }
  ],
  tools: [{
    type: "function",
    function: {
      name: func.name,
      description: func.description,
      parameters: func.parameters,
    }
  }],
});

// 4. Execute the function call on your actual server
const toolCall: OpenAI.ChatCompletionMessageToolCall =
  completion.choices[0].message.tool_calls![0];
const result: unknown = await HttpLlm.execute({
  connection: { host: "http://localhost:37001" },
  application,
  function: func,
  input: JSON.parse(toolCall.function.arguments),
});

Validation Feedback - Fixing LLM Mistakes

The Problem: LLMs make type errors. A lot.

Even when your schema says Array<string>, GPT might return just "string". In real-world testing with OpenAI GPT-4o-mini on a shopping service:

  • 1st attempt: 70% success rate ❌
  • 2nd attempt (with validation feedback): 98% success rate βœ…
  • 3rd attempt: Never failed βœ…

The Solution: Validate LLM output and send errors back for correction.

import { HttpLlm, OpenApi, IHttpLlmApplication, IHttpLlmFunction, IValidation } from "@samchon/openapi";

// Setup application
const document: OpenApi.IDocument = OpenApi.convert(swagger);
const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
  model: "chatgpt",
  document,
});
const func: IHttpLlmFunction<"chatgpt"> = application.functions[0];

// Validate LLM-generated arguments
const result: IValidation<unknown> = func.validate(llmArguments);

if (result.success === false) {
  // Send detailed error feedback to LLM
  return await retryWithFeedback({
    message: "Type errors detected. Please correct the arguments.",
    errors: result.errors, // Detailed error information
  });
} else {
  // Execute the validated function
  const output: unknown = await HttpLlm.execute({
    connection: { host: "http://localhost:3000" },
    application,
    function: func,
    input: result.data,
  });
  return output;
}
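
One way to wire that retry loop with the OpenAI SDK is to report the validation errors as the tool result, so the model composes corrected arguments on its next turn. A sketch, assuming the client, conversation history, and tool call from the Complete Example above:

import OpenAI from "openai";
import { IHttpLlmFunction, IValidation } from "@samchon/openapi";

// Assumed conversation state (see the Complete Example):
declare const client: OpenAI;
declare const history: OpenAI.ChatCompletionMessageParam[]; // includes the assistant tool-call message
declare const toolCall: OpenAI.ChatCompletionMessageToolCall;
declare const func: IHttpLlmFunction<"chatgpt">;

const validation: IValidation<unknown> = func.validate(
  JSON.parse(toolCall.function.arguments),
);
if (validation.success === false) {
  // Hand the detailed type errors back as the tool result and retry.
  const retried: OpenAI.ChatCompletion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      ...history,
      {
        role: "tool",
        tool_call_id: toolCall.id,
        content: JSON.stringify({
          message: "Type errors detected. Please correct the arguments.",
          errors: validation.errors,
        }),
      },
    ],
    tools: [{
      type: "function",
      function: {
        name: func.name,
        description: func.description,
        parameters: func.parameters,
      },
    }],
  });
}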

The validation uses typia.validate<T>(), which provides the most accurate validation and extremely detailed error messages compared to other validators:

| Components | typia | TypeBox | ajv | io-ts | zod | C.V. |
|---|---|---|---|---|---|---|
| Easy to use | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (simple) | βœ” | βœ” | βœ” | βœ” | βœ” | βœ” |
| Object (hierarchical) | βœ” | βœ” | βœ” | βœ” | βœ” | βœ” |
| Object (recursive) | βœ” | ❌ | βœ” | βœ” | βœ” | βœ” |
| Object (union, implicit) | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (union, explicit) | βœ” | βœ” | βœ” | βœ” | βœ” | ❌ |
| Object (additional tags) | βœ” | βœ” | βœ” | βœ” | βœ” | βœ” |
| Object (template literal types) | βœ” | βœ” | βœ” | ❌ | ❌ | ❌ |
| Object (dynamic properties) | βœ” | βœ” | βœ” | ❌ | ❌ | ❌ |
| Array (rest tuple) | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (hierarchical) | βœ” | βœ” | βœ” | βœ” | βœ” | βœ” |
| Array (recursive) | βœ” | βœ” | βœ” | βœ” | βœ” | ❌ |
| Array (recursive, union) | βœ” | βœ” | ❌ | βœ” | βœ” | ❌ |
| Array (R+U, implicit) | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated) | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated, union) | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ultimate Union Type | βœ… | ❌ | ❌ | ❌ | ❌ | ❌ |

C.V. means class-validator
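
To get a feel for those error messages, here is a minimal sketch, assuming typia's compile-time transformer is configured in your build:

import typia from "typia";

// Each error reports the access path, the expected type expression, and
// the actual value - exactly what the feedback loop above sends back.
const result: typia.IValidation<{ tags: string[] }> =
  typia.validate<{ tags: string[] }>({ tags: "not-an-array" });

if (result.success === false)
  console.log(result.errors);
// e.g. [{ path: "$input.tags", expected: "Array<string>", value: "not-an-array" }]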
