# @samchon/openapi

```mermaid
flowchart
subgraph "OpenAPI Specification"
v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
v30("OpenAPI v3.0") --upgrades--> emended
v31("OpenAPI v3.1") --emends--> emended
end
subgraph "OpenAPI Generator"
emended --normalizes--> migration[["Migration Schema"]]
migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
lfc --"OpenAI"--> chatgpt("ChatGPT")
lfc --"Google"--> gemini("Gemini")
lfc --"Anthropic"--> claude("Claude")
lfc --"<i>Google</i>" --> legacy_gemini("<i> (legacy) Gemini</i>")
legacy_gemini --"3.0" --> custom(["Custom JSON Schema"])
chatgpt --"3.1"--> custom
gemini --"3.1"--> standard(["Standard JSON Schema"])
claude --"3.1"--> standard
end
```
Transform OpenAPI documents into type-safe LLM function calling applications.
@samchon/openapi converts any version of OpenAPI/Swagger documents into LLM function calling schemas for OpenAI GPT, Claude, and Gemini. It supports every OpenAPI version (Swagger 2.0, OpenAPI 3.0, and OpenAPI 3.1) with full TypeScript type definitions. The library also works with MCP (Model Context Protocol) servers, enabling seamless AI agent development.
Key Features:

- Supports every OpenAPI version: Swagger v2.0, OpenAPI v3.0, and OpenAPI v3.1
- Generates LLM function calling schemas for OpenAI GPT, Anthropic Claude, and Google Gemini
- Full TypeScript type definitions for all specifications
- Works with MCP (Model Context Protocol) servers for AI agent development
Live Demo:
https://github.com/user-attachments/assets/e1faf30b-c703-4451-b68b-2e7a8170bce5
Watch how `@samchon/openapi` powers an AI shopping chatbot with `@agentica`.
```bash
npm install @samchon/openapi
```
Transform your OpenAPI document into an LLM function calling application in just a few lines:
```typescript
import {
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
} from "@samchon/openapi";

// Load and convert your OpenAPI document
const document: OpenApi.IDocument = OpenApi.convert(swagger);

// Generate LLM function calling schemas
const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
  model: "chatgpt", // "chatgpt" | "claude" | "gemini"
  document,
});

// Find a function by path and method
const func: IHttpLlmFunction<"chatgpt"> | undefined =
  application.functions.find(
    (f) => f.path === "/bbs/articles" && f.method === "post",
  );
if (func === undefined) throw new Error("Function not found");

// Execute the function with LLM-composed arguments
const result: unknown = await HttpLlm.execute({
  connection: { host: "http://localhost:3000" },
  application,
  function: func,
  input: llmGeneratedArgs, // from OpenAI/Claude/Gemini
});
```
That's it! Your HTTP backend is now callable by AI.
@samchon/openapi provides complete TypeScript definitions for all OpenAPI versions and introduces an "emended" OpenAPI v3.1 specification that serves as a universal intermediate format.
```mermaid
flowchart
v20("Swagger v2.0") --upgrades--> emended[["<b><u>OpenAPI v3.1 (emended)</u></b>"]]
v30("OpenAPI v3.0") --upgrades--> emended
v31("OpenAPI v3.1") --emends--> emended
emended --downgrades--> v20d("Swagger v2.0")
emended --downgrades--> v30d("OpenAPI v3.0")
```
Supported Specifications:

- Swagger v2.0
- OpenAPI v3.0
- OpenAPI v3.1
- OpenAPI v3.1 (emended): the universal intermediate format
The emended specification removes ambiguities and duplications from OpenAPI v3.1, creating a cleaner, more consistent format. All conversions flow through this intermediate format.
Key Improvements:

- Normalizes `anyOf`, `oneOf`, and `allOf` patterns into simpler structures

```typescript
import { OpenApi, OpenApiV3, SwaggerV2 } from "@samchon/openapi";

// Convert any version to the emended format
const emended: OpenApi.IDocument = OpenApi.convert(swagger); // Swagger 2.0/3.0/3.1

// Downgrade to older versions if needed
const v30: OpenApiV3.IDocument = OpenApi.downgrade(emended, "3.0");
const v20: SwaggerV2.IDocument = OpenApi.downgrade(emended, "2.0");
```
Use typia for runtime validation with detailed type checking - far more accurate than other validators:
```typescript
import { OpenApi, OpenApiV3, OpenApiV3_1, SwaggerV2 } from "@samchon/openapi";
import typia from "typia";

const document: any = await fetch("swagger.json").then((r) => r.json());

// Validate with detailed error messages
const result: typia.IValidation<
  SwaggerV2.IDocument | OpenApiV3.IDocument | OpenApiV3_1.IDocument
> = typia.validate<
  SwaggerV2.IDocument | OpenApiV3.IDocument | OpenApiV3_1.IDocument
>(document);

if (result.success) {
  const emended: OpenApi.IDocument = OpenApi.convert(result.data);
} else {
  console.error(result.errors); // Detailed validation errors
}
```
Try it in the playground: Type assertion | Detailed validation
```mermaid
flowchart
subgraph "OpenAPI Specification"
v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
v30("OpenAPI v3.0") --upgrades--> emended
v31("OpenAPI v3.1") --emends--> emended
end
subgraph "OpenAPI Generator"
emended --normalizes--> migration[["Migration Schema"]]
migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
lfc --"OpenAI"--> chatgpt("ChatGPT")
lfc --"Google"--> gemini("Gemini")
lfc --"Anthropic"--> claude("Claude")
lfc --"<i>Google</i>" --> legacy_gemini("<i> (legacy) Gemini</i>")
legacy_gemini --"3.0" --> custom(["Custom JSON Schema"])
chatgpt --"3.1"--> custom
gemini --"3.1"--> standard(["Standard JSON Schema"])
claude --"3.1"--> standard
end
```
Turn your HTTP backend into an AI-callable service. @samchon/openapi converts your OpenAPI document into function schemas that OpenAI, Claude, and Gemini can understand and call.
- `IChatGptSchema`: for OpenAI GPT; moves unsupported constraints into the `description` field to bypass OpenAI's schema limitations
- `IClaudeSchema`: for Anthropic Claude (recommended)
- `IGeminiSchema`: for Google Gemini
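The schema variant is selected by the `model` type parameter when composing the application. A minimal sketch, assuming `document` is an emended document produced by `OpenApi.convert()` as in the quickstart above:

```typescript
import { HttpLlm, IHttpLlmApplication, OpenApi } from "@samchon/openapi";

// Assumption: an emended document from OpenApi.convert(), as shown earlier
declare const document: OpenApi.IDocument;

// Each `model` value yields the matching schema type in the generated
// function parameters:
//   "chatgpt" -> IChatGptSchema, "claude" -> IClaudeSchema, "gemini" -> IGeminiSchema
const claude: IHttpLlmApplication<"claude"> = HttpLlm.application({
  model: "claude",
  document,
});
const gemini: IHttpLlmApplication<"gemini"> = HttpLlm.application({
  model: "gemini",
  document,
});
```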
> [!NOTE]
> You can also compose an `ILlmApplication` from a TypeScript class using `typia`:
> https://typia.io/docs/llm/application

```typescript
import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";

const app: ILlmApplication<"chatgpt"> =
  typia.llm.application<YourClassType, "chatgpt">();
```
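For instance, a minimal sketch with a hypothetical `BbsArticleService` class (the class and its method are illustrative, not part of the library):

```typescript
import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";

// Hypothetical service class, for illustration only
class BbsArticleService {
  /** Create a new article with the given title and body. */
  public async create(props: { title: string; body: string }): Promise<void> {
    // ... persist the article somewhere
  }
}

// Every public method becomes an LLM-callable function schema
const app: ILlmApplication<"chatgpt"> =
  typia.llm.application<BbsArticleService, "chatgpt">();
console.log(app.functions.map((f) => f.name)); // e.g. ["create"]
```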
Here's a full example showing how OpenAI GPT selects a function, fills arguments, and you execute it:
```typescript
import {
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
} from "@samchon/openapi";
import OpenAI from "openai";

// 1. Convert OpenAPI to LLM function calling application
const document: OpenApi.IDocument = OpenApi.convert(swagger);
const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
  model: "chatgpt",
  document,
});

// 2. Find the function by path and method
const func: IHttpLlmFunction<"chatgpt"> | undefined =
  application.functions.find(
    (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
  );
if (!func) throw new Error("Function not found");

// 3. Let OpenAI GPT call the function
const client: OpenAI = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const completion: OpenAI.ChatCompletion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful shopping assistant." },
    { role: "user", content: "I want to sell Microsoft Surface Pro 9..." },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: func.name,
        description: func.description,
        parameters: func.parameters,
      },
    },
  ],
});

// 4. Execute the function call on your actual server
const toolCall: OpenAI.ChatCompletionMessageToolCall =
  completion.choices[0].message.tool_calls![0];
const result: unknown = await HttpLlm.execute({
  connection: { host: "http://localhost:37001" },
  application,
  function: func,
  input: JSON.parse(toolCall.function.arguments),
});
```
The Problem: LLMs make type errors. A lot.
Even when your schema says `Array<string>`, GPT might return just `"string"`. In real-world testing with OpenAI GPT-4o-mini on a shopping service, such type errors appeared routinely.
The Solution: Validate LLM output and send errors back for correction.
```typescript
import {
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
  IValidation,
  OpenApi,
} from "@samchon/openapi";

// Setup application
const document: OpenApi.IDocument = OpenApi.convert(swagger);
const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
  model: "chatgpt",
  document,
});
const func: IHttpLlmFunction<"chatgpt"> = application.functions[0];

// Validate LLM-generated arguments
// (llmArguments: whatever arguments the LLM composed for this function)
const result: IValidation<unknown> = func.validate(llmArguments);
if (result.success === false) {
  // Send detailed error feedback to the LLM
  // (retryWithFeedback() stands in for your own re-prompting logic)
  return await retryWithFeedback({
    message: "Type errors detected. Please correct the arguments.",
    errors: result.errors, // Detailed error information
  });
} else {
  // Execute the validated function
  const output: unknown = await HttpLlm.execute({
    connection: { host: "http://localhost:3000" },
    application,
    function: func,
    input: result.data,
  });
  return output;
}
```
The validation uses typia.validate<T>(), which provides the most accurate validation and extremely detailed error messages compared to other validators:
| Components | typia | TypeBox | ajv | io-ts | zod | C.V. |
|---|---|---|---|---|---|---|
| Easy to use | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (simple) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Object (hierarchical) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Object (recursive) | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Object (union, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (union, explicit) | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| Object (additional tags) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Object (template literal types) | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| Object (dynamic properties) | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| Array (rest tuple) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (hierarchical) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Array (recursive) | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |
| Array (recursive, union) | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ |
| Array (R+U, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated, union) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ultimate Union Type | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
`C.V.` means `class-validator`.
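As a rough sketch of what those detailed errors look like, assuming an illustrative `IMember` type (not part of the library):

```typescript
import typia, { IValidation } from "typia";

// Illustrative type, echoing the Array<string> example above
interface IMember {
  name: string;
  tags: Array<string>;
}

// The classic LLM mistake: a bare string where an array belongs
const input: unknown = { name: "samchon", tags: "typescript" };

const result: IValidation<IMember> = typia.validate<IMember>(input);
if (result.success === false)
  console.log(result.errors);
// Each error reports its access path, the expected type, and the actual value,
// e.g. { path: "$input.tags", expected: "Array<string>", value: "typescript" }
```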