@langchain/openai - npm Package Compare versions

Comparing version 0.2.6 to 0.2.7

dist/utils/tools.cjs

dist/azure/chat_models.d.ts

@@ -5,2 +5,414 @@ import { type ClientOptions } from "openai";

import { AzureOpenAIInput, LegacyOpenAIInput, OpenAIChatInput, OpenAICoreRequestOptions } from "../types.js";
/**
* Azure OpenAI chat model integration.
*
* Setup:
* Install `@langchain/openai` and set the following environment variables:
*
* ```bash
* npm install @langchain/openai
* export AZURE_OPENAI_API_KEY="your-api-key"
* export AZURE_OPENAI_API_DEPLOYMENT_NAME="your-deployment-name"
* export AZURE_OPENAI_API_VERSION="your-version"
* export AZURE_OPENAI_BASE_PATH="your-base-path"
* ```
*
* ## [Constructor args](https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html#constructor)
*
* ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_openai.ChatOpenAICallOptions.html)
*
* Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`, `.stream`, `.batch`, etc.
* They can also be passed via `.bind`, or as the second arg in `.bindTools`, as shown in the examples below:
*
* ```typescript
* // When calling `.bind`, call options should be passed via the first argument
* const llmWithArgsBound = llm.bind({
* stop: ["\n"],
* tools: [...],
* });
*
* // When calling `.bindTools`, call options should be passed via the second argument
* const llmWithTools = llm.bindTools(
* [...],
* {
* tool_choice: "auto",
* }
* );
* ```
*
* ## Examples
*
* <details open>
* <summary><strong>Instantiate</strong></summary>
*
* ```typescript
* import { AzureChatOpenAI } from '@langchain/openai';
*
* const llm = new AzureChatOpenAI({
* azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
* azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
* azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
* azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION, // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
* temperature: 0,
* maxTokens: undefined,
* timeout: undefined,
* maxRetries: 2,
* // apiKey: "...",
* // baseUrl: "...",
* // other params...
* });
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Invoking</strong></summary>
*
* ```typescript
* const input = `Translate "I love programming" into French.`;
*
* // Models also accept a list of chat messages or a formatted prompt
* const result = await llm.invoke(input);
* console.log(result);
* ```
*
* ```txt
* AIMessage {
* "id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "tokenUsage": {
* "completionTokens": 5,
* "promptTokens": 28,
* "totalTokens": 33
* },
* "finish_reason": "stop",
* "system_fingerprint": "fp_3aa7262c27"
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Streaming Chunks</strong></summary>
*
* ```typescript
* for await (const chunk of await llm.stream(input)) {
* console.log(chunk);
* }
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs",
* "content": ""
* }
* AIMessageChunk {
* "content": "J"
* }
* AIMessageChunk {
* "content": "'adore"
* }
* AIMessageChunk {
* "content": " la"
* }
* AIMessageChunk {
* "content": " programmation",,
* }
* AIMessageChunk {
* "content": ".",,
* }
* AIMessageChunk {
* "content": "",
* "response_metadata": {
* "finish_reason": "stop",
* "system_fingerprint": "fp_c9aa9c0491"
* },
* }
* AIMessageChunk {
* "content": "",
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Aggregate Streamed Chunks</strong></summary>
*
* ```typescript
* import { AIMessageChunk } from '@langchain/core/messages';
* import { concat } from '@langchain/core/utils/stream';
*
* const stream = await llm.stream(input);
* let full: AIMessageChunk | undefined;
* for await (const chunk of stream) {
* full = !full ? chunk : concat(full, chunk);
* }
* console.log(full);
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "prompt": 0,
* "completion": 0,
* "finish_reason": "stop",
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Bind tools</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const GetWeather = {
* name: "GetWeather",
* description: "Get the current weather in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const GetPopulation = {
* name: "GetPopulation",
* description: "Get the current population in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const llmWithTools = llm.bindTools([GetWeather, GetPopulation]);
* const aiMsg = await llmWithTools.invoke(
* "Which city is hotter today and which is bigger: LA or NY?"
* );
* console.log(aiMsg.tool_calls);
* ```
*
* ```txt
* [
* {
* name: 'GetWeather',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK'
* },
* {
* name: 'GetWeather',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX'
* },
* {
* name: 'GetPopulation',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_kL3OXxaq9OjIKqRTpvjaCH14'
* },
* {
* name: 'GetPopulation',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_s9KQB1UWj45LLGaEnjz0179q'
* }
* ]
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Structured Output</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const Joke = z.object({
* setup: z.string().describe("The setup of the joke"),
* punchline: z.string().describe("The punchline to the joke"),
* rating: z.number().optional().describe("How funny the joke is, from 1 to 10")
* }).describe('Joke to tell user.');
*
* const structuredLlm = llm.withStructuredOutput(Joke);
* const jokeResult = await structuredLlm.invoke("Tell me a joke about cats", { name: "Joke" });
* console.log(jokeResult);
* ```
*
* ```txt
* {
* setup: 'Why was the cat sitting on the computer?',
* punchline: 'Because it wanted to keep an eye on the mouse!',
* rating: 7
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>JSON Object Response Format</strong></summary>
*
* ```typescript
* const jsonLlm = llm.bind({ response_format: { type: "json_object" } });
* const jsonLlmAiMsg = await jsonLlm.invoke(
* "Return a JSON object with key 'randomInts' and a value of 10 random ints in [0-99]"
* );
* console.log(jsonLlmAiMsg.content);
* ```
*
* ```txt
* {
* "randomInts": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Multimodal</strong></summary>
*
* ```typescript
* import { HumanMessage } from '@langchain/core/messages';
*
* const imageUrl = "https://example.com/image.jpg";
* const imageData = await fetch(imageUrl).then(res => res.arrayBuffer());
* const base64Image = Buffer.from(imageData).toString('base64');
*
* const message = new HumanMessage({
* content: [
* { type: "text", text: "describe the weather in this image" },
* {
* type: "image_url",
* image_url: { url: `data:image/jpeg;base64,${base64Image}` },
* },
* ]
* });
*
* const imageDescriptionAiMsg = await llm.invoke([message]);
* console.log(imageDescriptionAiMsg.content);
* ```
*
* ```txt
* The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered white clouds, indicating fair weather. The bright sunlight is casting shadows on the green, grassy hill, suggesting it is a pleasant day with good visibility. There are no signs of rain or stormy conditions.
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Usage Metadata</strong></summary>
*
* ```typescript
* const aiMsgForMetadata = await llm.invoke(input);
* console.log(aiMsgForMetadata.usage_metadata);
* ```
*
* ```txt
* { input_tokens: 28, output_tokens: 5, total_tokens: 33 }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Logprobs</strong></summary>
*
* ```typescript
* const logprobsLlm = new AzureChatOpenAI({ logprobs: true });
* const aiMsgForLogprobs = await logprobsLlm.invoke(input);
* console.log(aiMsgForLogprobs.response_metadata.logprobs);
* ```
*
* ```txt
* {
* content: [
* {
* token: 'J',
* logprob: -0.000050616763,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: "'",
* logprob: -0.01868736,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: 'ad',
* logprob: -0.0000030545007,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ore', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: ' la',
* logprob: -0.515404,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: ' programm',
* logprob: -0.0000118755715,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ation', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: '.',
* logprob: -0.0000037697225,
* bytes: [Array],
* top_logprobs: []
* }
* ],
* refusal: null
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Response Metadata</strong></summary>
*
* ```typescript
* const aiMsgForResponseMetadata = await llm.invoke(input);
* console.log(aiMsgForResponseMetadata.response_metadata);
* ```
*
* ```txt
* {
* tokenUsage: { completionTokens: 5, promptTokens: 28, totalTokens: 33 },
* finish_reason: 'stop',
* system_fingerprint: 'fp_3aa7262c27'
* }
* ```
* </details>
*/
export declare class AzureChatOpenAI extends ChatOpenAI {

@@ -7,0 +419,0 @@ _llmType(): string;

import { AzureOpenAI as AzureOpenAIClient } from "openai";
import { ChatOpenAI } from "../chat_models.js";
import { getEndpoint } from "../utils/azure.js";
/**
* Azure OpenAI chat model integration.
*
* Setup:
* Install `@langchain/openai` and set the following environment variables:
*
* ```bash
* npm install @langchain/openai
* export AZURE_OPENAI_API_KEY="your-api-key"
* export AZURE_OPENAI_API_DEPLOYMENT_NAME="your-deployment-name"
* export AZURE_OPENAI_API_VERSION="your-version"
* export AZURE_OPENAI_BASE_PATH="your-base-path"
* ```
*
* ## [Constructor args](https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html#constructor)
*
* ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_openai.ChatOpenAICallOptions.html)
*
* Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`, `.stream`, `.batch`, etc.
* They can also be passed via `.bind`, or as the second arg in `.bindTools`, as shown in the examples below:
*
* ```typescript
* // When calling `.bind`, call options should be passed via the first argument
* const llmWithArgsBound = llm.bind({
* stop: ["\n"],
* tools: [...],
* });
*
* // When calling `.bindTools`, call options should be passed via the second argument
* const llmWithTools = llm.bindTools(
* [...],
* {
* tool_choice: "auto",
* }
* );
* ```
*
* ## Examples
*
* <details open>
* <summary><strong>Instantiate</strong></summary>
*
* ```typescript
* import { AzureChatOpenAI } from '@langchain/openai';
*
* const llm = new AzureChatOpenAI({
* azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
* azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
* azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME, // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
* azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION, // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
* temperature: 0,
* maxTokens: undefined,
* timeout: undefined,
* maxRetries: 2,
* // apiKey: "...",
* // baseUrl: "...",
* // other params...
* });
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Invoking</strong></summary>
*
* ```typescript
* const input = `Translate "I love programming" into French.`;
*
* // Models also accept a list of chat messages or a formatted prompt
* const result = await llm.invoke(input);
* console.log(result);
* ```
*
* ```txt
* AIMessage {
* "id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "tokenUsage": {
* "completionTokens": 5,
* "promptTokens": 28,
* "totalTokens": 33
* },
* "finish_reason": "stop",
* "system_fingerprint": "fp_3aa7262c27"
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Streaming Chunks</strong></summary>
*
* ```typescript
* for await (const chunk of await llm.stream(input)) {
* console.log(chunk);
* }
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs",
* "content": ""
* }
* AIMessageChunk {
* "content": "J"
* }
* AIMessageChunk {
* "content": "'adore"
* }
* AIMessageChunk {
* "content": " la"
* }
* AIMessageChunk {
* "content": " programmation",,
* }
* AIMessageChunk {
* "content": ".",,
* }
* AIMessageChunk {
* "content": "",
* "response_metadata": {
* "finish_reason": "stop",
* "system_fingerprint": "fp_c9aa9c0491"
* },
* }
* AIMessageChunk {
* "content": "",
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Aggregate Streamed Chunks</strong></summary>
*
* ```typescript
* import { AIMessageChunk } from '@langchain/core/messages';
* import { concat } from '@langchain/core/utils/stream';
*
* const stream = await llm.stream(input);
* let full: AIMessageChunk | undefined;
* for await (const chunk of stream) {
* full = !full ? chunk : concat(full, chunk);
* }
* console.log(full);
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "prompt": 0,
* "completion": 0,
* "finish_reason": "stop",
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Bind tools</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const GetWeather = {
* name: "GetWeather",
* description: "Get the current weather in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const GetPopulation = {
* name: "GetPopulation",
* description: "Get the current population in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const llmWithTools = llm.bindTools([GetWeather, GetPopulation]);
* const aiMsg = await llmWithTools.invoke(
* "Which city is hotter today and which is bigger: LA or NY?"
* );
* console.log(aiMsg.tool_calls);
* ```
*
* ```txt
* [
* {
* name: 'GetWeather',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK'
* },
* {
* name: 'GetWeather',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX'
* },
* {
* name: 'GetPopulation',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_kL3OXxaq9OjIKqRTpvjaCH14'
* },
* {
* name: 'GetPopulation',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_s9KQB1UWj45LLGaEnjz0179q'
* }
* ]
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Structured Output</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const Joke = z.object({
* setup: z.string().describe("The setup of the joke"),
* punchline: z.string().describe("The punchline to the joke"),
* rating: z.number().optional().describe("How funny the joke is, from 1 to 10")
* }).describe('Joke to tell user.');
*
* const structuredLlm = llm.withStructuredOutput(Joke);
* const jokeResult = await structuredLlm.invoke("Tell me a joke about cats", { name: "Joke" });
* console.log(jokeResult);
* ```
*
* ```txt
* {
* setup: 'Why was the cat sitting on the computer?',
* punchline: 'Because it wanted to keep an eye on the mouse!',
* rating: 7
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>JSON Object Response Format</strong></summary>
*
* ```typescript
* const jsonLlm = llm.bind({ response_format: { type: "json_object" } });
* const jsonLlmAiMsg = await jsonLlm.invoke(
* "Return a JSON object with key 'randomInts' and a value of 10 random ints in [0-99]"
* );
* console.log(jsonLlmAiMsg.content);
* ```
*
* ```txt
* {
* "randomInts": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Multimodal</strong></summary>
*
* ```typescript
* import { HumanMessage } from '@langchain/core/messages';
*
* const imageUrl = "https://example.com/image.jpg";
* const imageData = await fetch(imageUrl).then(res => res.arrayBuffer());
* const base64Image = Buffer.from(imageData).toString('base64');
*
* const message = new HumanMessage({
* content: [
* { type: "text", text: "describe the weather in this image" },
* {
* type: "image_url",
* image_url: { url: `data:image/jpeg;base64,${base64Image}` },
* },
* ]
* });
*
* const imageDescriptionAiMsg = await llm.invoke([message]);
* console.log(imageDescriptionAiMsg.content);
* ```
*
* ```txt
* The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered white clouds, indicating fair weather. The bright sunlight is casting shadows on the green, grassy hill, suggesting it is a pleasant day with good visibility. There are no signs of rain or stormy conditions.
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Usage Metadata</strong></summary>
*
* ```typescript
* const aiMsgForMetadata = await llm.invoke(input);
* console.log(aiMsgForMetadata.usage_metadata);
* ```
*
* ```txt
* { input_tokens: 28, output_tokens: 5, total_tokens: 33 }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Logprobs</strong></summary>
*
* ```typescript
* const logprobsLlm = new AzureChatOpenAI({ logprobs: true });
* const aiMsgForLogprobs = await logprobsLlm.invoke(input);
* console.log(aiMsgForLogprobs.response_metadata.logprobs);
* ```
*
* ```txt
* {
* content: [
* {
* token: 'J',
* logprob: -0.000050616763,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: "'",
* logprob: -0.01868736,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: 'ad',
* logprob: -0.0000030545007,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ore', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: ' la',
* logprob: -0.515404,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: ' programm',
* logprob: -0.0000118755715,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ation', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: '.',
* logprob: -0.0000037697225,
* bytes: [Array],
* top_logprobs: []
* }
* ],
* refusal: null
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Response Metadata</strong></summary>
*
* ```typescript
* const aiMsgForResponseMetadata = await llm.invoke(input);
* console.log(aiMsgForResponseMetadata.response_metadata);
* ```
*
* ```txt
* {
* tokenUsage: { completionTokens: 5, promptTokens: 28, totalTokens: 33 },
* finish_reason: 'stop',
* system_fingerprint: 'fp_3aa7262c27'
* }
* ```
* </details>
*/
export class AzureChatOpenAI extends ChatOpenAI {

@@ -5,0 +417,0 @@ _llmType() {

dist/chat_models.d.ts

@@ -9,3 +9,4 @@ import { type ClientOptions, OpenAI as OpenAIClient } from "openai";

import { Runnable } from "@langchain/core/runnables";
import type { AzureOpenAIInput, OpenAICallOptions, OpenAIChatInput, OpenAICoreRequestOptions, LegacyOpenAIInput } from "./types.js";
import { ParsedChatCompletion } from "openai/resources/beta/chat/completions.mjs";
import type { AzureOpenAIInput, OpenAICallOptions, OpenAIChatInput, OpenAICoreRequestOptions, LegacyOpenAIInput, ChatOpenAIResponseFormat } from "./types.js";
import { OpenAIToolChoice } from "./utils/openai.js";

@@ -45,5 +46,3 @@ export type { AzureOpenAIInput, OpenAICallOptions, OpenAIChatInput };

promptIndex?: number;
response_format?: {
type: "json_object";
};
response_format?: ChatOpenAIResponseFormat;
seed?: number;
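
The widened `response_format` option above is the surface for this release's structured-output support. A minimal sketch of exercising it (the model name and `Reply` schema are illustrative assumptions, not from the package):

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// Hypothetical schema, purely for illustration.
const Reply = z.object({
  text: z.string(),
});

const llm = new ChatOpenAI({ model: "gpt-4o-2024-08-06" });

// 0.2.6 only typed { type: "json_object" } here; 0.2.7 also accepts
// "json_schema" carrying either a JSON Schema object or a Zod schema.
const structuredLlm = llm.bind({
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "reply",
      schema: Reply,
    },
  },
});

const msg = await structuredLlm.invoke("Say hi as JSON with a `text` key.");
console.log(msg.content);
```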

@@ -86,33 +85,414 @@ /**

/**
* Wrapper around OpenAI large language models that use the Chat endpoint.
* OpenAI chat model integration.
*
* To use you should have the `OPENAI_API_KEY` environment variable set.
* Setup:
* Install `@langchain/openai` and set an environment variable named `OPENAI_API_KEY`.
*
* To use with Azure you should have the:
* `AZURE_OPENAI_API_KEY`,
* `AZURE_OPENAI_API_INSTANCE_NAME`,
* `AZURE_OPENAI_API_DEPLOYMENT_NAME`
* and `AZURE_OPENAI_API_VERSION` environment variables set.
* `AZURE_OPENAI_BASE_PATH` is optional and will override `AZURE_OPENAI_API_INSTANCE_NAME` if you need to use a custom endpoint.
* ```bash
* npm install @langchain/openai
* export OPENAI_API_KEY="your-api-key"
* ```
*
* @remarks
* Any parameters that are valid to be passed to {@link
* https://platform.openai.com/docs/api-reference/chat/create |
* `openai.createChatCompletion`} can be passed through {@link modelKwargs}, even
* if not explicitly available on this class.
* @example
* ## [Constructor args](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html#constructor)
*
* ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_openai.ChatOpenAICallOptions.html)
*
* Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`, `.stream`, `.batch`, etc.
* They can also be passed via `.bind`, or as the second arg in `.bindTools`, as shown in the examples below:
*
* ```typescript
* // Create a new instance of ChatOpenAI with specific temperature and model name settings
* const model = new ChatOpenAI({
* temperature: 0.9,
* model: "ft:gpt-3.5-turbo-0613:{ORG_NAME}::{MODEL_ID}",
* // When calling `.bind`, call options should be passed via the first argument
* const llmWithArgsBound = llm.bind({
* stop: ["\n"],
* tools: [...],
* });
*
* // Invoke the model with a message and await the response
* const message = await model.invoke("Hi there!");
* // When calling `.bindTools`, call options should be passed via the second argument
* const llmWithTools = llm.bindTools(
* [...],
* {
* tool_choice: "auto",
* }
* );
* ```
*
* // Log the response to the console
* console.log(message);
* ## Examples
*
* <details open>
* <summary><strong>Instantiate</strong></summary>
*
* ```typescript
* import { ChatOpenAI } from '@langchain/openai';
*
* const llm = new ChatOpenAI({
* model: "gpt-4o",
* temperature: 0,
* maxTokens: undefined,
* timeout: undefined,
* maxRetries: 2,
* // apiKey: "...",
* // baseUrl: "...",
* // organization: "...",
* // other params...
* });
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Invoking</strong></summary>
*
* ```typescript
* const input = `Translate "I love programming" into French.`;
*
* // Models also accept a list of chat messages or a formatted prompt
* const result = await llm.invoke(input);
* console.log(result);
* ```
*
* ```txt
* AIMessage {
* "id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "tokenUsage": {
* "completionTokens": 5,
* "promptTokens": 28,
* "totalTokens": 33
* },
* "finish_reason": "stop",
* "system_fingerprint": "fp_3aa7262c27"
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Streaming Chunks</strong></summary>
*
* ```typescript
* for await (const chunk of await llm.stream(input)) {
* console.log(chunk);
* }
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs",
* "content": ""
* }
* AIMessageChunk {
* "content": "J"
* }
* AIMessageChunk {
* "content": "'adore"
* }
* AIMessageChunk {
* "content": " la"
* }
* AIMessageChunk {
* "content": " programmation",,
* }
* AIMessageChunk {
* "content": ".",,
* }
* AIMessageChunk {
* "content": "",
* "response_metadata": {
* "finish_reason": "stop",
* "system_fingerprint": "fp_c9aa9c0491"
* },
* }
* AIMessageChunk {
* "content": "",
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Aggregate Streamed Chunks</strong></summary>
*
* ```typescript
* import { AIMessageChunk } from '@langchain/core/messages';
* import { concat } from '@langchain/core/utils/stream';
*
* const stream = await llm.stream(input);
* let full: AIMessageChunk | undefined;
* for await (const chunk of stream) {
* full = !full ? chunk : concat(full, chunk);
* }
* console.log(full);
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "prompt": 0,
* "completion": 0,
* "finish_reason": "stop",
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Bind tools</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const GetWeather = {
* name: "GetWeather",
* description: "Get the current weather in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const GetPopulation = {
* name: "GetPopulation",
* description: "Get the current population in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const llmWithTools = llm.bindTools(
* [GetWeather, GetPopulation],
* {
* // strict: true // enforce that tool arguments match the schema
* }
* );
* const aiMsg = await llmWithTools.invoke(
* "Which city is hotter today and which is bigger: LA or NY?"
* );
* console.log(aiMsg.tool_calls);
* ```
*
* ```txt
* [
* {
* name: 'GetWeather',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK'
* },
* {
* name: 'GetWeather',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX'
* },
* {
* name: 'GetPopulation',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_kL3OXxaq9OjIKqRTpvjaCH14'
* },
* {
* name: 'GetPopulation',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_s9KQB1UWj45LLGaEnjz0179q'
* }
* ]
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Structured Output</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const Joke = z.object({
* setup: z.string().describe("The setup of the joke"),
* punchline: z.string().describe("The punchline to the joke"),
* rating: z.number().optional().describe("How funny the joke is, from 1 to 10")
* }).describe('Joke to tell user.');
*
* const structuredLlm = llm.withStructuredOutput(Joke);
* const jokeResult = await structuredLlm.invoke("Tell me a joke about cats", { name: "Joke" });
* console.log(jokeResult);
* ```
*
* ```txt
* {
* setup: 'Why was the cat sitting on the computer?',
* punchline: 'Because it wanted to keep an eye on the mouse!',
* rating: 7
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>JSON Object Response Format</strong></summary>
*
* ```typescript
* const jsonLlm = llm.bind({ response_format: { type: "json_object" } });
* const jsonLlmAiMsg = await jsonLlm.invoke(
* "Return a JSON object with key 'randomInts' and a value of 10 random ints in [0-99]"
* );
* console.log(jsonLlmAiMsg.content);
* ```
*
* ```txt
* {
* "randomInts": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Multimodal</strong></summary>
*
* ```typescript
* import { HumanMessage } from '@langchain/core/messages';
*
* const imageUrl = "https://example.com/image.jpg";
* const imageData = await fetch(imageUrl).then(res => res.arrayBuffer());
* const base64Image = Buffer.from(imageData).toString('base64');
*
* const message = new HumanMessage({
* content: [
* { type: "text", text: "describe the weather in this image" },
* {
* type: "image_url",
* image_url: { url: `data:image/jpeg;base64,${base64Image}` },
* },
* ]
* });
*
* const imageDescriptionAiMsg = await llm.invoke([message]);
* console.log(imageDescriptionAiMsg.content);
* ```
*
* ```txt
* The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered white clouds, indicating fair weather. The bright sunlight is casting shadows on the green, grassy hill, suggesting it is a pleasant day with good visibility. There are no signs of rain or stormy conditions.
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Usage Metadata</strong></summary>
*
* ```typescript
* const aiMsgForMetadata = await llm.invoke(input);
* console.log(aiMsgForMetadata.usage_metadata);
* ```
*
* ```txt
* { input_tokens: 28, output_tokens: 5, total_tokens: 33 }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Logprobs</strong></summary>
*
* ```typescript
* const logprobsLlm = new ChatOpenAI({ logprobs: true });
* const aiMsgForLogprobs = await logprobsLlm.invoke(input);
* console.log(aiMsgForLogprobs.response_metadata.logprobs);
* ```
*
* ```txt
* {
* content: [
* {
* token: 'J',
* logprob: -0.000050616763,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: "'",
* logprob: -0.01868736,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: 'ad',
* logprob: -0.0000030545007,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ore', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: ' la',
* logprob: -0.515404,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: ' programm',
* logprob: -0.0000118755715,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ation', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: '.',
* logprob: -0.0000037697225,
* bytes: [Array],
* top_logprobs: []
* }
* ],
* refusal: null
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Response Metadata</strong></summary>
*
* ```typescript
* const aiMsgForResponseMetadata = await llm.invoke(input);
* console.log(aiMsgForResponseMetadata.response_metadata);
* ```
*
* ```txt
* {
* tokenUsage: { completionTokens: 5, promptTokens: 28, totalTokens: 33 },
* finish_reason: 'stop',
* system_fingerprint: 'fp_3aa7262c27'
* }
* ```
* </details>
*
* <br />
*/

@@ -167,2 +547,3 @@ export declare class ChatOpenAI<CallOptions extends ChatOpenAICallOptions = ChatOpenAICallOptions> extends BaseChatModel<CallOptions, AIMessageChunk> implements OpenAIChatInput, AzureOpenAIInput {

bindTools(tools: ChatOpenAIToolType[], kwargs?: Partial<CallOptions>): Runnable<BaseLanguageModelInput, AIMessageChunk, CallOptions>;
private createResponseFormat;
/**

@@ -209,2 +590,9 @@ * Get the parameters used to invoke the model

completionWithRetry(request: OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming, options?: OpenAICoreRequestOptions): Promise<OpenAIClient.Chat.Completions.ChatCompletion>;
/**
* Call the beta chat completions parse endpoint. This should only be called if
* response_format is set to "json_object".
* @param {OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming} request
* @param {OpenAICoreRequestOptions | undefined} options
*/
betaParsedCompletionWithRetry(request: OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming, options?: OpenAICoreRequestOptions): Promise<ParsedChatCompletion<null>>;
protected _getClientOptions(options: OpenAICoreRequestOptions | undefined): OpenAICoreRequestOptions;

@@ -211,0 +599,0 @@ _llmType(): string;

@@ -7,3 +7,2 @@ import { OpenAI as OpenAIClient } from "openai";

import { isOpenAITool, } from "@langchain/core/language_models/base";
import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
import { RunnablePassthrough, RunnableSequence, } from "@langchain/core/runnables";

@@ -13,5 +12,7 @@ import { JsonOutputParser, StructuredOutputParser, } from "@langchain/core/output_parsers";

import { zodToJsonSchema } from "zod-to-json-schema";
import { zodResponseFormat } from "openai/helpers/zod";
import { getEndpoint } from "./utils/azure.js";
import { formatToOpenAIToolChoice, wrapOpenAIClientError, } from "./utils/openai.js";
import { formatFunctionDefinitions, } from "./utils/openai-format-fndef.js";
import { _convertToOpenAITool } from "./utils/tools.js";
function extractGenericMessageCustomRole(message) {

@@ -199,36 +200,417 @@ if (message.role !== "system" &&

}
return convertToOpenAITool(tool, fields);
return _convertToOpenAITool(tool, fields);
}
/**
* Wrapper around OpenAI large language models that use the Chat endpoint.
* OpenAI chat model integration.
*
* To use you should have the `OPENAI_API_KEY` environment variable set.
* Setup:
* Install `@langchain/openai` and set an environment variable named `OPENAI_API_KEY`.
*
* To use with Azure you should have the:
* `AZURE_OPENAI_API_KEY`,
* `AZURE_OPENAI_API_INSTANCE_NAME`,
* `AZURE_OPENAI_API_DEPLOYMENT_NAME`
* and `AZURE_OPENAI_API_VERSION` environment variables set.
* `AZURE_OPENAI_BASE_PATH` is optional and will override `AZURE_OPENAI_API_INSTANCE_NAME` if you need to use a custom endpoint.
* ```bash
* npm install @langchain/openai
* export OPENAI_API_KEY="your-api-key"
* ```
*
* @remarks
* Any parameters that are valid to be passed to {@link
* https://platform.openai.com/docs/api-reference/chat/create |
* `openai.createChatCompletion`} can be passed through {@link modelKwargs}, even
* if not explicitly available on this class.
* @example
* ## [Constructor args](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html#constructor)
*
* ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_openai.ChatOpenAICallOptions.html)
*
* Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`, `.stream`, `.batch`, etc.
* They can also be passed via `.bind`, or as the second arg in `.bindTools`, as shown in the examples below:
*
* ```typescript
* // Create a new instance of ChatOpenAI with specific temperature and model name settings
* const model = new ChatOpenAI({
* temperature: 0.9,
* model: "ft:gpt-3.5-turbo-0613:{ORG_NAME}::{MODEL_ID}",
* // When calling `.bind`, call options should be passed via the first argument
* const llmWithArgsBound = llm.bind({
* stop: ["\n"],
* tools: [...],
* });
*
* // Invoke the model with a message and await the response
* const message = await model.invoke("Hi there!");
* // When calling `.bindTools`, call options should be passed via the second argument
* const llmWithTools = llm.bindTools(
* [...],
* {
* tool_choice: "auto",
* }
* );
* ```
*
* // Log the response to the console
* console.log(message);
* ## Examples
*
* <details open>
* <summary><strong>Instantiate</strong></summary>
*
* ```typescript
* import { ChatOpenAI } from '@langchain/openai';
*
* const llm = new ChatOpenAI({
* model: "gpt-4o",
* temperature: 0,
* maxTokens: undefined,
* timeout: undefined,
* maxRetries: 2,
* // apiKey: "...",
* // baseUrl: "...",
* // organization: "...",
* // other params...
* });
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Invoking</strong></summary>
*
* ```typescript
* const input = `Translate "I love programming" into French.`;
*
* // Models also accept a list of chat messages or a formatted prompt
* const result = await llm.invoke(input);
* console.log(result);
* ```
*
* ```txt
* AIMessage {
* "id": "chatcmpl-9u4Mpu44CbPjwYFkTbeoZgvzB00Tz",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "tokenUsage": {
* "completionTokens": 5,
* "promptTokens": 28,
* "totalTokens": 33
* },
* "finish_reason": "stop",
* "system_fingerprint": "fp_3aa7262c27"
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Streaming Chunks</strong></summary>
*
* ```typescript
* for await (const chunk of await llm.stream(input)) {
* console.log(chunk);
* }
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4NWB7yUeHCKdLr6jP3HpaOYHTqs",
* "content": ""
* }
* AIMessageChunk {
* "content": "J"
* }
* AIMessageChunk {
* "content": "'adore"
* }
* AIMessageChunk {
* "content": " la"
* }
* AIMessageChunk {
* "content": " programmation",,
* }
* AIMessageChunk {
* "content": ".",,
* }
* AIMessageChunk {
* "content": "",
* "response_metadata": {
* "finish_reason": "stop",
* "system_fingerprint": "fp_c9aa9c0491"
* },
* }
* AIMessageChunk {
* "content": "",
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Aggregate Streamed Chunks</strong></summary>
*
* ```typescript
* import { AIMessageChunk } from '@langchain/core/messages';
* import { concat } from '@langchain/core/utils/stream';
*
* const stream = await llm.stream(input);
* let full: AIMessageChunk | undefined;
* for await (const chunk of stream) {
* full = !full ? chunk : concat(full, chunk);
* }
* console.log(full);
* ```
*
* ```txt
* AIMessageChunk {
* "id": "chatcmpl-9u4PnX6Fy7OmK46DASy0bH6cxn5Xu",
* "content": "J'adore la programmation.",
* "response_metadata": {
* "prompt": 0,
* "completion": 0,
* "finish_reason": "stop",
* },
* "usage_metadata": {
* "input_tokens": 28,
* "output_tokens": 5,
* "total_tokens": 33
* }
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Bind tools</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const GetWeather = {
* name: "GetWeather",
* description: "Get the current weather in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const GetPopulation = {
* name: "GetPopulation",
* description: "Get the current population in a given location",
* schema: z.object({
* location: z.string().describe("The city and state, e.g. San Francisco, CA")
* }),
* }
*
* const llmWithTools = llm.bindTools(
* [GetWeather, GetPopulation],
* {
* // strict: true // enforce that tool arguments match the schema
* }
* );
* const aiMsg = await llmWithTools.invoke(
* "Which city is hotter today and which is bigger: LA or NY?"
* );
* console.log(aiMsg.tool_calls);
* ```
*
* ```txt
* [
* {
* name: 'GetWeather',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_uPU4FiFzoKAtMxfmPnfQL6UK'
* },
* {
* name: 'GetWeather',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_UNkEwuQsHrGYqgDQuH9nPAtX'
* },
* {
* name: 'GetPopulation',
* args: { location: 'Los Angeles, CA' },
* type: 'tool_call',
* id: 'call_kL3OXxaq9OjIKqRTpvjaCH14'
* },
* {
* name: 'GetPopulation',
* args: { location: 'New York, NY' },
* type: 'tool_call',
* id: 'call_s9KQB1UWj45LLGaEnjz0179q'
* }
* ]
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Structured Output</strong></summary>
*
* ```typescript
* import { z } from 'zod';
*
* const Joke = z.object({
* setup: z.string().describe("The setup of the joke"),
* punchline: z.string().describe("The punchline to the joke"),
* rating: z.number().optional().describe("How funny the joke is, from 1 to 10")
* }).describe('Joke to tell user.');
*
* const structuredLlm = llm.withStructuredOutput(Joke);
* const jokeResult = await structuredLlm.invoke("Tell me a joke about cats", { name: "Joke" });
* console.log(jokeResult);
* ```
*
* ```txt
* {
* setup: 'Why was the cat sitting on the computer?',
* punchline: 'Because it wanted to keep an eye on the mouse!',
* rating: 7
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>JSON Object Response Format</strong></summary>
*
* ```typescript
* const jsonLlm = llm.bind({ response_format: { type: "json_object" } });
* const jsonLlmAiMsg = await jsonLlm.invoke(
* "Return a JSON object with key 'randomInts' and a value of 10 random ints in [0-99]"
* );
* console.log(jsonLlmAiMsg.content);
* ```
*
* ```txt
* {
* "randomInts": [23, 87, 45, 12, 78, 34, 56, 90, 11, 67]
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Multimodal</strong></summary>
*
* ```typescript
* import { HumanMessage } from '@langchain/core/messages';
*
* const imageUrl = "https://example.com/image.jpg";
* const imageData = await fetch(imageUrl).then(res => res.arrayBuffer());
* const base64Image = Buffer.from(imageData).toString('base64');
*
* const message = new HumanMessage({
* content: [
* { type: "text", text: "describe the weather in this image" },
* {
* type: "image_url",
* image_url: { url: `data:image/jpeg;base64,${base64Image}` },
* },
* ]
* });
*
* const imageDescriptionAiMsg = await llm.invoke([message]);
* console.log(imageDescriptionAiMsg.content);
* ```
*
* ```txt
* The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered white clouds, indicating fair weather. The bright sunlight is casting shadows on the green, grassy hill, suggesting it is a pleasant day with good visibility. There are no signs of rain or stormy conditions.
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Usage Metadata</strong></summary>
*
* ```typescript
* const aiMsgForMetadata = await llm.invoke(input);
* console.log(aiMsgForMetadata.usage_metadata);
* ```
*
* ```txt
* { input_tokens: 28, output_tokens: 5, total_tokens: 33 }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Logprobs</strong></summary>
*
* ```typescript
* const logprobsLlm = new ChatOpenAI({ logprobs: true });
* const aiMsgForLogprobs = await logprobsLlm.invoke(input);
* console.log(aiMsgForLogprobs.response_metadata.logprobs);
* ```
*
* ```txt
* {
* content: [
* {
* token: 'J',
* logprob: -0.000050616763,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: "'",
* logprob: -0.01868736,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: 'ad',
* logprob: -0.0000030545007,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ore', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: ' la',
* logprob: -0.515404,
* bytes: [Array],
* top_logprobs: []
* },
* {
* token: ' programm',
* logprob: -0.0000118755715,
* bytes: [Array],
* top_logprobs: []
* },
* { token: 'ation', logprob: 0, bytes: [Array], top_logprobs: [] },
* {
* token: '.',
* logprob: -0.0000037697225,
* bytes: [Array],
* top_logprobs: []
* }
* ],
* refusal: null
* }
* ```
* </details>
*
* <br />
*
* <details>
* <summary><strong>Response Metadata</strong></summary>
*
* ```typescript
* const aiMsgForResponseMetadata = await llm.invoke(input);
* console.log(aiMsgForResponseMetadata.response_metadata);
* ```
*
* ```txt
* {
* tokenUsage: { completionTokens: 5, promptTokens: 28, totalTokens: 33 },
* finish_reason: 'stop',
* system_fingerprint: 'fp_3aa7262c27'
* }
* ```
* </details>
*
* <br />
*/

@@ -573,2 +955,13 @@ export class ChatOpenAI extends BaseChatModel {

}
createResponseFormat(resFormat) {
if (resFormat &&
resFormat.type === "json_schema" &&
resFormat.json_schema.schema &&
isZodSchema(resFormat.json_schema.schema)) {
return zodResponseFormat(resFormat.json_schema.schema, resFormat.json_schema.name, {
description: resFormat.json_schema.description,
});
}
return resFormat;
}
/**
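
For context, `zodResponseFormat` (from the `openai` SDK's Zod helpers, imported earlier in this diff) performs the Zod-to-JSON-Schema conversion that `createResponseFormat` delegates to. A standalone sketch of the same conversion, assuming `openai` >= 4.55 and `zod` are installed:

```typescript
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";

// Illustrative schema; any Zod object works.
const Joke = z.object({
  setup: z.string(),
  punchline: z.string(),
});

// Produces a wire-level { type: "json_schema", json_schema: { name, schema, ... } }
// payload from the Zod object, which is what createResponseFormat forwards
// to the OpenAI client when it detects a Zod schema.
const responseFormat = zodResponseFormat(Joke, "joke", {
  description: "A joke with a setup and a punchline.",
});

console.log(responseFormat.type); // "json_schema"
console.log(responseFormat.json_schema.name); // "joke"
```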

@@ -613,3 +1006,3 @@ * Get the parameters used to invoke the model

tool_choice: formatToOpenAIToolChoice(options?.tool_choice),
response_format: options?.response_format,
response_format: this.createResponseFormat(options?.response_format),
seed: options?.seed,

@@ -640,2 +1033,16 @@ ...streamOptionsConfig,

let defaultRole;
if (params.response_format &&
params.response_format.type === "json_schema") {
console.warn(`OpenAI does not yet support streaming with "response_format" set to "json_schema". Falling back to non-streaming mode.`);
const res = await this._generate(messages, options, runManager);
const chunk = new ChatGenerationChunk({
message: new AIMessageChunk({
...res.generations[0].message,
}),
text: res.generations[0].text,
generationInfo: res.generations[0].generationInfo,
});
yield chunk;
return runManager?.handleLLMNewToken(res.generations[0].text ?? "", undefined, undefined, undefined, undefined, { chunk });
}
const streamIterable = await this.completionWithRetry(params, options);
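
The practical effect of this fallback is that `.stream()` degrades to a single aggregate chunk whenever `response_format` is of type `json_schema`. A sketch of what a caller would observe (model name and schema are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Illustrative setup: any model bound with a "json_schema" response_format.
const llm = new ChatOpenAI({ model: "gpt-4o-2024-08-06" }).bind({
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "reply",
      schema: { type: "object", properties: { text: { type: "string" } } },
    },
  },
});

// Because of the fallback above, this prints the console warning, then the
// loop body runs once with the complete JSON string, rather than receiving
// token-level deltas.
for await (const chunk of await llm.stream("Say hi as JSON.")) {
  console.log(chunk.content);
}
```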

@@ -744,10 +1151,24 @@ let usage;

else {
const data = await this.completionWithRetry({
...params,
stream: false,
messages: messagesMapped,
}, {
signal: options?.signal,
...options?.options,
});
let data;
if (options.response_format &&
options.response_format.type === "json_schema") {
data = await this.betaParsedCompletionWithRetry({
...params,
stream: false,
messages: messagesMapped,
}, {
signal: options?.signal,
...options?.options,
});
}
else {
data = await this.completionWithRetry({
...params,
stream: false,
messages: messagesMapped,
}, {
signal: options?.signal,
...options?.options,
});
}
const { completion_tokens: completionTokens, prompt_tokens: promptTokens, total_tokens: totalTokens, } = data?.usage ?? {};

@@ -897,2 +1318,21 @@ if (completionTokens) {

}
/**
* Call the beta chat completions parse endpoint. This should only be called if
* response_format is set to "json_object".
* @param {OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming} request
* @param {OpenAICoreRequestOptions | undefined} options
*/
async betaParsedCompletionWithRetry(request, options) {
const requestOptions = this._getClientOptions(options);
return this.caller.call(async () => {
try {
const res = await this.client.beta.chat.completions.parse(request, requestOptions);
return res;
}
catch (e) {
const error = wrapOpenAIClientError(e);
throw error;
}
});
}
_getClientOptions(options) {
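
`betaParsedCompletionWithRetry` is a thin retry/error-handling wrapper over the OpenAI SDK's beta parse helper. The equivalent raw call, sketched directly against the `openai` client (model name and schema are illustrative):

```typescript
import OpenAI from "openai";

// Assumes OPENAI_API_KEY is set in the environment.
const client = new OpenAI();

// ChatOpenAI routes non-streaming "json_schema" requests through this beta
// endpoint, wrapped in the model's retry and error handling as above.
const completion = await client.beta.chat.completions.parse({
  model: "gpt-4o-2024-08-06",
  messages: [{ role: "user", content: "Say hi as JSON with a `text` key." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "reply",
      strict: true,
      schema: {
        type: "object",
        properties: { text: { type: "string" } },
        required: ["text"],
        additionalProperties: false,
      },
    },
  },
});

console.log(completion.choices[0].message.parsed);
```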

@@ -990,2 +1430,21 @@ if (!this.client) {

}
else if (method === "jsonSchema") {
llm = this.bind({
response_format: {
type: "json_schema",
json_schema: {
name: name ?? "extract",
description: schema.description,
schema,
strict: config?.strict,
},
},
});
if (isZodSchema(schema)) {
outputParser = StructuredOutputParser.fromZodSchema(schema);
}
else {
outputParser = new JsonOutputParser();
}
}
else {
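
This new branch means `withStructuredOutput` can be pointed at the native structured-outputs path. A usage sketch (the `"jsonSchema"` method string comes from the branch above; the model name and schema are illustrative):

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline to the joke"),
});

const llm = new ChatOpenAI({ model: "gpt-4o-2024-08-06" });

// method: "jsonSchema" selects the response_format-based branch above rather
// than the default tool-calling strategy.
const structuredLlm = llm.withStructuredOutput(Joke, {
  name: "joke",
  method: "jsonSchema",
});

console.log(await structuredLlm.invoke("Tell me a joke about cats"));
```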

@@ -992,0 +1451,0 @@ let functionName = name ?? "extract";

import type { OpenAI as OpenAIClient } from "openai";
import type { ResponseFormatText, ResponseFormatJSONObject, ResponseFormatJSONSchema } from "openai/resources/shared";
import { TiktokenModel } from "js-tiktoken/lite";
import type { BaseLanguageModelCallOptions } from "@langchain/core/language_models/base";
import type { z } from "zod";
export type { TiktokenModel };

@@ -182,1 +184,11 @@ export declare interface OpenAIBaseInput {

}
type ChatOpenAIResponseFormatJSONSchema = Omit<ResponseFormatJSONSchema, "json_schema"> & {
json_schema: Omit<ResponseFormatJSONSchema["json_schema"], "schema"> & {
/**
* The schema for the response format, described as a JSON Schema object
* or a Zod object.
*/
schema: Record<string, any> | z.ZodObject<any, any, any, any>;
};
};
export type ChatOpenAIResponseFormat = ResponseFormatText | ResponseFormatJSONObject | ChatOpenAIResponseFormatJSONSchema;
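
A small sketch of the two schema forms this union admits (values are illustrative; the import path mirrors the package-internal one used in this diff):

```typescript
import { z } from "zod";
// Path as used inside the package; external consumers would import from
// wherever the package re-exports the type.
import type { ChatOpenAIResponseFormat } from "./types.js";

// Form 1: a plain JSON Schema object.
const viaJsonSchema: ChatOpenAIResponseFormat = {
  type: "json_schema",
  json_schema: {
    name: "reply",
    schema: { type: "object", properties: { text: { type: "string" } } },
  },
};

// Form 2: a Zod object; createResponseFormat converts it via zodResponseFormat.
const viaZod: ChatOpenAIResponseFormat = {
  type: "json_schema",
  json_schema: {
    name: "reply",
    schema: z.object({ text: z.string() }),
  },
};
```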

package.json
{
"name": "@langchain/openai",
"version": "0.2.6",
"version": "0.2.7",
"description": "OpenAI integrations for LangChain.js",

@@ -38,3 +38,3 @@ "type": "module",

"dependencies": {
"@langchain/core": ">=0.2.21 <0.3.0",
"@langchain/core": ">=0.2.26 <0.3.0",
"js-tiktoken": "^1.0.12",

@@ -48,3 +48,3 @@ "openai": "^4.55.0",

"@jest/globals": "^29.5.0",
"@langchain/scripts": "~0.0.20",
"@langchain/scripts": "^0.0.21",
"@langchain/standard-tests": "0.0.0",

@@ -51,0 +51,0 @@ "@swc/core": "^1.3.90",
