chat-about-video

Chat about a video clip (or have a text-only chat without any video) using OpenAI ChatGPT (hosted in OpenAI or Microsoft Azure) or Google Gemini (hosted in Google Cloud).


chat-about-video is an open-source NPM package designed to accelerate the development of conversational applications about video content. Harnessing the capabilities of ChatGPT from Microsoft Azure or OpenAI, as well as Gemini from Google, this package supports a range of usage scenarios with minimal effort.

Key features:

  • ChatGPT models hosted in both Azure and OpenAI are supported.
  • Gemini models hosted in Google Cloud are supported.
  • Frame images are extracted from the input video and uploaded for ChatGPT/Gemini to consume.
  • It can automatically retry on receiving throttling (HTTP status code 429) and server error (HTTP status code 5xx) responses from the API; see the sketch after this list.
  • Options supported by the underlying API are exposed for customisation.
  • It can also be used in scenarios where no video is involved, which means it works for "normal" text chats too.
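For example, the retry backoff can be configured through the completionOptions property of the constructor options. Below is a minimal sketch (assuming ChatGPT hosted in OpenAI; the backoff values are illustrative):

// Each array entry is a wait period in milliseconds before the corresponding retry
// (see AdditionalCompletionOptions in the API section below).
const chat = new ChatAboutVideo({
  credential: { key: process.env.OPENAI_API_KEY! },
  storage: { azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING! },
  completionOptions: {
    deploymentName: 'gpt-4o',
    backoffOnThrottling: [1000, 2000, 4000], // waits before retrying after HTTP 429
    backoffOnServerError: [1000, 2000, 4000], // waits before retrying after HTTP 5xx
  },
});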

Usage

Installation

To use chat-about-video in your Node.js application, add it as a dependency along with other necessary packages based on your usage scenario. Below are examples for typical setups:

# ChatGPT on OpenAI or Azure with Azure Blob Storage
npm i chat-about-video @azure/openai @ffmpeg-installer/ffmpeg @azure/storage-blob
# Gemini in Google Cloud
npm i chat-about-video @google/generative-ai @ffmpeg-installer/ffmpeg
# ChatGPT on OpenAI or Azure with AWS S3
npm i chat-about-video @azure/openai @ffmpeg-installer/ffmpeg @handy-common-utils/aws-utils @aws-sdk/s3-request-presigner @aws-sdk/client-s3

Optional dependencies

ChatGPT

To use ChatGPT hosted on OpenAI or Azure:

npm i @azure/openai

Gemini

To use Gemini hosted on Google Cloud:

npm i @google/generative-ai

ffmpeg

If you need ffmpeg for extracting video frame images, ensure it is installed. You can use a system package manager or an NPM package:

sudo apt install ffmpeg
# or
npm i @ffmpeg-installer/ffmpeg

Azure Blob Storage

To use Azure Blob Storage for frame images (not needed for Gemini):

npm i @azure/storage-blob

AWS S3

To use AWS S3 for frame images (not needed for Gemini):

npm i @handy-common-utils/aws-utils @aws-sdk/s3-request-presigner @aws-sdk/client-s3

How the video is provided to ChatGPT or Gemini

ChatGPT

There are two approaches for feeding video content to ChatGPT. chat-about-video supports both of them.

Frame image extraction:

  • Integrate ChatGPT from Microsoft Azure or OpenAI effortlessly.
  • Utilize ffmpeg integration provided by this package for frame image extraction or opt for a DIY approach.
  • Store frame images with ease, supporting Azure Blob Storage and AWS S3.
  • GPT-4o and GPT-4 Vision Preview hosted in Azure allow analysis of up to 10 frame images.
  • GPT-4o and GPT-4 Vision Preview hosted in OpenAI allow analysis of more than 10 frame images.

Video indexing with Microsoft Azure:

  • Exclusively supported by GPT-4 Vision Preview from Microsoft Azure.
  • Ingest videos seamlessly into Microsoft Azure's Video Retrieval Index.
  • Automatic extraction of up to 20 frame images using Video Retrieval Indexer.
  • Default integration of speech transcription for enhanced comprehension.
  • Flexible storage options with support for Azure Blob Storage and AWS S3.

Gemini

chat-about-video supports sending video frames directly to Google's API, without any cloud storage involved.

  • Utilize ffmpeg integration provided by this package for frame image extraction or opt for a DIY approach.
  • The number of frame images is limited only by the Gemini API in Google Cloud.

Concrete types and low-level clients

ChatAboutVideo and Conversation are generic classes. Use them without concrete generic type parameters when you want the flexibility to easily switch between ChatGPT and Gemini.

Otherwise, you may want to use concrete types. Below are some examples:

// cast to a concrete type
const castToChatGpt = chat as ChatAboutVideoWithChatGpt;

// you can also just leave the ChatAboutVideo instance generic, but narrow down the conversation type
const conversationWithGemini = (await chat.startConversation(...)) as ConversationWithGemini;
const conversationWithChatGpt = await (chat as ChatAboutVideoWithChatGpt).startConversation(...);

To access the underlying API wrapper, use the getApi() function on the ChatAboutVideo instance. To get the raw API client, use the getClient() function on the awaited object returned from getApi().
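A minimal sketch of drilling down from a ChatAboutVideo instance to the raw client:

// The wrapper implements the ChatApi interface; the raw client is, for example,
// an OpenAIClient for ChatGPT or a GenerativeModel for Gemini.
const api = await chat.getApi();
const client = await api.getClient();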

Cleaning up

Intermediate files, such as extracted frame images, can be saved locally or in the cloud. To remove these files when they are no longer needed, remember to call the end() function on the Conversation instance when the conversation finishes.
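A minimal sketch (the video path is illustrative) that guarantees cleanup even when an error is thrown mid-conversation:

const conversation = await chat.startConversation('my-video.mp4');
try {
  console.log(await conversation.say('Describe the video.'));
} finally {
  await conversation.end(); // removes intermediate files if configured to do so
}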

Customisation

Frame extraction

If you would like to customise how frame images are extracted and stored, consider these options (a combined sketch follows this list):

  • In the options object passed to the constructor of ChatAboutVideo, there's a property extractVideoFrames. This property allows you to customise how frame images are extracted.
    • format, interval, limit, width, height - These allow you to specify your expectations for the extraction.
    • deleteFilesWhenConversationEnds - This flag specifies whether extracted frame images should be deleted from the local file system when the conversation ends.
    • framesDirectoryResolver - You can supply a function for determining where extracted frame image files should be stored locally.
    • extractor - You can supply a function for doing the extraction.
  • In the options object passed to the constructor of ChatAboutVideo, there's a property storage. For ChatGPT, storing frame images in the cloud is recommended. You can use this property to customise how frame images are stored in the cloud.
    • azureStorageConnectionString - If you would like to use Azure Blob Storage, put the connection string in this property. If this property has no value, ChatAboutVideo assumes you'd like to use AWS S3, and the default AWS identity/credentials will be picked up from the OS.
    • storageContainerName, storagePathPrefix - These allow you to specify where those images should be stored.
    • downloadUrlExpirationSeconds - For images stored in the cloud, presigned download URLs with expiration are generated for ChatGPT to access. This property allows you to control the expiration time.
    • deleteFilesWhenConversationEnds - This flag specifies whether extracted frame images should be deleted from the cloud when the conversation ends.
    • uploader - You can supply a function for uploading images into the cloud.
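Below is a minimal sketch combining the options above (all values are illustrative):

const chat = new ChatAboutVideo({
  credential: { key: process.env.OPENAI_API_KEY! },
  extractVideoFrames: {
    format: 'jpg', // image format of the extracted frames
    interval: 5, // seconds between extracted frames
    limit: 10, // maximum number of frames
    width: 200, // frame width in pixels
    deleteFilesWhenConversationEnds: true,
  },
  storage: {
    azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
    storageContainerName: 'video-frames-container',
    storagePathPrefix: 'video-frames/',
    downloadUrlExpirationSeconds: 3600,
    deleteFilesWhenConversationEnds: true,
  },
  completionOptions: {
    deploymentName: 'gpt-4o',
  },
});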

Settings of the underlying model

In the options object passed to the constructor of ChatAboutVideo, there's a property clientSettings and another property completionOptions. Settings of the underlying model can be configured through those two properties.

You can also override settings using the last parameter of the startConversation(...) function on ChatAboutVideo, or the last parameter of the say(...) function on Conversation.
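A minimal sketch of a per-message override (option names here follow the ChatGPT flavour of the API; say(...) accepts a Partial of the completion options):

const answer = await conversation.say('Summarise the video.', { maxTokens: 2000 });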

Code examples

Example 1: Using GPT-4o or GPT-4 Vision Preview hosted in OpenAI with Azure Blob Storage

// This is a demo utilising GPT-4o or Vision preview hosted in OpenAI.
// OpenAI API allows more than 10 (maximum allowed by Azure's OpenAI API) images to be supplied.
// Video frame images are uploaded to Azure Blob Storage and then made available to GPT from there.
//
// This script can be executed with a command line like this from the project root directory:
// export OPENAI_API_KEY=...
// export AZURE_STORAGE_CONNECTION_STRING=...
// export OPENAI_MODEL_NAME=...
// export AZURE_STORAGE_CONTAINER_NAME=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo1.ts
//

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { ChatAboutVideo, ConversationWithChatGpt } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      credential: {
        key: process.env.OPENAI_API_KEY!,
      },
      storage: {
        azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
        storageContainerName: process.env.AZURE_STORAGE_CONTAINER_NAME || 'vision-experiment-input',
        storagePathPrefix: 'video-frames/',
      },
      completionOptions: {
        deploymentName: process.env.OPENAI_MODEL_NAME || 'gpt-4o', // 'gpt-4-vision-preview', // or gpt-4o
      },
      extractVideoFrames: {
        limit: 100,
        interval: 2,
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithChatGpt;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, { maxTokens: 2000 });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2))));

Example 2: Using GPT-4 Vision Preview hosted in Azure with Azure Video Retrieval Indexer

// This is a demo utilising GPT-4 Vision preview hosted in Azure.
// Azure Video Retrieval Indexer is used for extracting information from the input video.
// Information in Azure Video Retrieval Indexer is supplied to GPT.
//
// This script can be executed with a command line like this from the project root directory:
// export AZURE_OPENAI_API_ENDPOINT=..
// export AZURE_OPENAI_API_KEY=...
// export AZURE_OPENAI_DEPLOYMENT_NAME=...
// export AZURE_STORAGE_CONNECTION_STRING=...
// export AZURE_STORAGE_CONTAINER_NAME=...
// export AZURE_CV_API_ENDPOINT=...
// export AZURE_CV_API_KEY=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo2.ts
//

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { ChatAboutVideo, ConversationWithChatGpt } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      endpoint: process.env.AZURE_OPENAI_API_ENDPOINT!,
      credential: {
        key: process.env.AZURE_OPENAI_API_KEY!,
      },
      storage: {
        azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
        storageContainerName: process.env.AZURE_STORAGE_CONTAINER_NAME || 'vision-experiment-input',
        storagePathPrefix: 'video-frames/',
      },
      completionOptions: {
        deploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME || 'gpt4vision',
      },
      videoRetrievalIndex: {
        endpoint: process.env.AZURE_CV_API_ENDPOINT!,
        apiKey: process.env.AZURE_CV_API_KEY!,
        createIndexIfNotExists: true,
        deleteIndexWhenConversationEnds: true,
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithChatGpt;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, { maxTokens: 2000 });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2)), (error as Error).stack));

Example 3: Using GPT-4 Vision Preview hosted in Azure with Azure Blob Storage

// This is a demo utilising GPT-4o or Vision preview hosted in Azure.
// Up to 10 (maximum allowed by Azure's OpenAI API) frames are extracted from the input video.
// Video frame images are uploaded to Azure Blob Storage and then made available to GPT from there.
//
// This script can be executed with a command line like this from the project root directory:
// export AZURE_OPENAI_API_ENDPOINT=..
// export AZURE_OPENAI_API_KEY=...
// export AZURE_OPENAI_DEPLOYMENT_NAME=...
// export AZURE_STORAGE_CONNECTION_STRING=...
// export AZURE_STORAGE_CONTAINER_NAME=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo3.ts

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { ChatAboutVideo, ConversationWithChatGpt } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      endpoint: process.env.AZURE_OPENAI_API_ENDPOINT!,
      credential: {
        key: process.env.AZURE_OPENAI_API_KEY!,
      },
      storage: {
        azureStorageConnectionString: process.env.AZURE_STORAGE_CONNECTION_STRING!,
        storageContainerName: process.env.AZURE_STORAGE_CONTAINER_NAME || 'vision-experiment-input',
        storagePathPrefix: 'video-frames/',
      },
      completionOptions: {
        deploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME || 'gpt4vision',
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithChatGpt;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, { maxTokens: 2000 });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2))));

Example 4: Using Gemini hosted in Google Cloud

// This is a demo utilising Google Gemini through Google Generative Language API.
// Google Gemini allows more than 10 (maximum allowed by Azure's OpenAI API) frame images to be supplied.
// Video frame images are sent through Google Generative Language API directly.
//
// This script can be executed with a command line like this from the project root directory:
// export GEMINI_API_KEY=...
// ENABLE_DEBUG=true DEMO_VIDEO=~/Downloads/test1.mp4 npx ts-node test/demo4.ts

import { consoleWithColour } from '@handy-common-utils/misc-utils';
import chalk from 'chalk';
import readline from 'node:readline';

import { HarmBlockThreshold, HarmCategory } from '@google/generative-ai';

import { ChatAboutVideo, ConversationWithGemini } from 'chat-about-video';

async function demo() {
  const chat = new ChatAboutVideo(
    {
      credential: {
        key: process.env.GEMINI_API_KEY!,
      },
      clientSettings: {
        modelParams: {
          model: 'gemini-1.5-flash',
        },
      },
      extractVideoFrames: {
        limit: 100,
        interval: 0.5,
      },
    },
    consoleWithColour({ debug: process.env.ENABLE_DEBUG === 'true' }, chalk),
  );

  const conversation = (await chat.startConversation(process.env.DEMO_VIDEO!)) as ConversationWithGemini;

  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const prompt = (question: string) => new Promise<string>((resolve) => rl.question(question, resolve));
  while (true) {
    const question = await prompt(chalk.red('\nUser: '));
    if (!question) {
      continue;
    }
    if (['exit', 'quit', 'q', 'end'].includes(question)) {
      await conversation.end();
      break;
    }
    const answer = await conversation.say(question, {
      safetySettings: [{ category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE }],
    });
    console.log(chalk.blue('\nAI:' + answer));
  }
  console.log('Demo finished');
  rl.close();
}

demo().catch((error) => console.log(chalk.red(JSON.stringify(error, null, 2))));

API


Class: VideoRetrievalApiClient

azure/video-retrieval-api-client.VideoRetrievalApiClient

Constructors
constructor

new VideoRetrievalApiClient(endpointBaseUrl, apiKey, apiVersion?)

Parameters
| Name | Type | Default value |
| :------ | :------ | :------ |
| endpointBaseUrl | string | undefined |
| apiKey | string | undefined |
| apiVersion | string | '2023-05-01-preview' |
Methods
createIndex

createIndex(indexName, indexOptions?): Promise<void>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
| indexOptions | CreateIndexOptions |
Returns

Promise<void>


createIndexIfNotExist

createIndexIfNotExist(indexName, indexOptions?): Promise<void>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
| indexOptions? | CreateIndexOptions |
Returns

Promise<void>


createIngestion

createIngestion(indexName, ingestionName, ingestion): Promise<void>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
| ingestionName | string |
| ingestion | IngestionRequest |
Returns

Promise<void>


deleteDocument

deleteDocument(indexName, documentUrl): Promise<void>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
| documentUrl | string |
Returns

Promise<void>


deleteIndex

deleteIndex(indexName): Promise<void>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
Returns

Promise<void>


getIndex

getIndex(indexName): Promise<undefined | IndexSummary>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
Returns

Promise<undefined | IndexSummary>


getIngestion

getIngestion(indexName, ingestionName): Promise<IngestionSummary>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
| ingestionName | string |
Returns

Promise<IngestionSummary>


ingest

ingest(indexName, ingestionName, ingestion, backoff?): Promise<void>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
| ingestionName | string |
| ingestion | IngestionRequest |
| backoff | number[] |
Returns

Promise<void>


listDocuments

listDocuments(indexName): Promise<DocumentSummary[]>

Parameters
| Name | Type |
| :------ | :------ |
| indexName | string |
Returns

Promise<DocumentSummary[]>


listIndexes

listIndexes(): Promise<IndexSummary[]>

Returns

Promise<IndexSummary[]>

Class: ChatAboutVideo<CLIENT, OPTIONS, PROMPT, RESPONSE>

chat.ChatAboutVideo

Type parameters
| Name | Type |
| :------ | :------ |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |
Constructors
constructor

new ChatAboutVideo<CLIENT, OPTIONS, PROMPT, RESPONSE>(options, log?)

Type parameters
| Name | Type |
| :------ | :------ |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |
Parameters
| Name | Type |
| :------ | :------ |
| options | SupportedChatApiOptions |
| log | undefined \| LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void> |
Properties
  • Protected apiPromise: Promise<ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>>
  • Protected log: undefined | LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void>
  • Protected options: SupportedChatApiOptions
Methods
getApi

getApi(): Promise<ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>>

Get the underlying API instance.

Returns

Promise<ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>>

The underlying API instance.


startConversation

startConversation(options?): Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

Start a conversation without a video

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| options? | OPTIONS | Overriding options for this conversation |
Returns

Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

The conversation.

startConversation(videoFile, options?): Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

Start a conversation about a video.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| videoFile | string | Path to a video file in local file system. |
| options? | OPTIONS | Overriding options for this conversation |
Returns

Promise<Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>>

The conversation.
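A minimal sketch of the no-video overload (assuming a constructed ChatAboutVideo instance named chat), which starts a plain text chat:

const conversation = await chat.startConversation();
const answer = await conversation.say('Hello!');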

Class: Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>

chat.Conversation

Type parameters
| Name | Type |
| :------ | :------ |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |
Constructors
constructor

new Conversation<CLIENT, OPTIONS, PROMPT, RESPONSE>(conversationId, api, prompt, options, cleanup?, log?)

Type parameters
| Name | Type |
| :------ | :------ |
| CLIENT | any |
| OPTIONS | extends AdditionalCompletionOptions = any |
| PROMPT | any |
| RESPONSE | any |
Parameters
| Name | Type |
| :------ | :------ |
| conversationId | string |
| api | ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE> |
| prompt | undefined \| PROMPT |
| options | OPTIONS |
| cleanup? | () => Promise<any> |
| log | undefined \| LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void> |
Properties
  • Protected api: ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>
  • Protected Optional cleanup: () => Promise<any>
  • Protected conversationId: string
  • Protected log: undefined | LineLogger<(message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void, (message?: any, ...optionalParams: any[]) => void>
  • Protected options: OPTIONS
  • Protected prompt: undefined | PROMPT
Methods
end

end(): Promise<void>

Returns

Promise<void>


getApi

getApi(): ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>

Get the underlying API instance.

Returns

ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>

The underlying API instance.


getPrompt

getPrompt(): undefined | PROMPT

Get the prompt for the current conversation. The prompt is the accumulated messages in the conversation so far.

Returns

undefined | PROMPT

The prompt which is the accumulated messages in the conversation so far.


say

say(message, options?): Promise<undefined | string>

Say something in the conversation, and get the response from AI

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| message | string | The message to say in the conversation. |
| options? | Partial<OPTIONS> | Options for fine control. |
Returns

Promise<undefined | string>

The response/completion

Class: ChatGptApi

chat-gpt.ChatGptApi

Implements

  • ChatApi

Constructors
constructor

new ChatGptApi(options)

Parameters
| Name | Type |
| :------ | :------ |
| options | ChatGptOptions |
Properties
  • Protected client: OpenAIClient
  • Protected Optional extractVideoFrames: Pick<ExtractVideoFramesOptions, "height"> & Required<Omit<ExtractVideoFramesOptions, "height">>
  • Protected options: ChatGptOptions
  • Protected storage: Required<Pick<StorageOptions, "uploader">> & StorageOptions
  • Protected tmpDir: string
  • Protected Optional videoRetrievalIndex: Required<Pick<VideoRetrievalIndexOptions, "createIndexIfNotExists" | "deleteDocumentWhenConversationEnds" | "deleteIndexWhenConversationEnds">> & VideoRetrievalIndexOptions
Methods
appendToPrompt

appendToPrompt(newPromptOrResponse, prompt?): Promise<ChatRequestMessageUnion[]>

Append a new prompt or response to form a full prompt. This function is useful to build a prompt that contains conversation history.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| newPromptOrResponse | ChatCompletions \| ChatRequestMessageUnion[] | A new prompt to be appended, or previous response to be appended. |
| prompt? | ChatRequestMessageUnion[] | The conversation history which is a prompt containing previous prompts and responses. If it is not provided, the conversation history returned will contain only what is in newPromptOrResponse. |
Returns

Promise<ChatRequestMessageUnion[]>

The full prompt which is effectively the conversation history.

Implementation of

ChatApi.appendToPrompt


buildTextPrompt

buildTextPrompt(text, _conversationId?): Promise<{ prompt: ChatRequestMessageUnion[] }>

Build prompt for sending text content to AI

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| text | string | The text content to be sent. |
| _conversationId? | string | Unique identifier of the conversation. |
Returns

Promise<{ prompt: ChatRequestMessageUnion[] }>

An object containing the prompt.

Implementation of

ChatApi.buildTextPrompt


buildVideoPrompt

buildVideoPrompt(videoFile, conversationId?): Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

Build prompt for sending video content to AI. Sometimes, to include a video in the conversation, additional options and/or cleanup are needed. In such cases, options to be passed to the generateContent function and/or a cleanup callback function will be returned in the output of this function.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| videoFile | string | Path to the video file. |
| conversationId? | string | Unique identifier of the conversation. |
Returns

Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

An object containing the prompt, optional options, and an optional cleanup function.

Implementation of

ChatApi.buildVideoPrompt


buildVideoPromptWithFrames

Protected buildVideoPromptWithFrames(videoFile, conversationId?): Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

Parameters
| Name | Type |
| :------ | :------ |
| videoFile | string |
| conversationId | string |
Returns

Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>


buildVideoPromptWithVideoRetrievalIndex

Protected buildVideoPromptWithVideoRetrievalIndex(videoFile, conversationId?): Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>

Parameters
| Name | Type |
| :------ | :------ |
| videoFile | string |
| conversationId | string |
Returns

Promise<BuildPromptOutput<ChatRequestMessageUnion[], { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions>>


generateContent

generateContent(prompt, options): Promise<ChatCompletions>

Generate content based on the given prompt and options.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| prompt | ChatRequestMessageUnion[] | The full prompt to generate content. |
| options | { deploymentName: string } & AdditionalCompletionOptions & GetChatCompletionsOptions | Optional options to control the content generation. |
Returns

Promise<ChatCompletions>

The generated content.

Implementation of

ChatApi.generateContent


getClient

getClient(): Promise<OpenAIClient>

Get the raw client. This function could be useful for advanced use cases.

Returns

Promise<OpenAIClient>

The raw client.

Implementation of

ChatApi.getClient


getResponseText

getResponseText(result): Promise<undefined | string>

Get the text from the response object

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| result | ChatCompletions | the response object |
Returns

Promise<undefined | string>

Implementation of

ChatApi.getResponseText


isServerError

isServerError(error): boolean

Check if the error is a server error.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| error | any | any error object |
Returns

boolean

true if the error is a server error, false otherwise.

Implementation of

ChatApi.isServerError


isThrottlingError

isThrottlingError(error): boolean

Check if the error is a throttling error.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| error | any | any error object |
Returns

boolean

true if the error is a throttling error, false otherwise.

Implementation of

ChatApi.isThrottlingError

Class: GeminiApi

gemini.GeminiApi

Implements

  • ChatApi

Constructors
constructor

new GeminiApi(options)

Parameters
| Name | Type |
| :------ | :------ |
| options | GeminiOptions |
Properties
  • Protected client: GenerativeModel
  • Protected extractVideoFrames: Pick<ExtractVideoFramesOptions, "height"> & Required<Omit<ExtractVideoFramesOptions, "height">>
  • Protected options: GeminiOptions
  • Protected tmpDir: string
Methods
appendToPrompt

appendToPrompt(newPromptOrResponse, prompt?): Promise<Content[]>

Append a new prompt or response to form a full prompt. This function is useful to build a prompt that contains conversation history.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| newPromptOrResponse | Content[] \| GenerateContentResult | A new prompt to be appended, or previous response to be appended. |
| prompt? | Content[] | The conversation history which is a prompt containing previous prompts and responses. If it is not provided, the conversation history returned will contain only what is in newPromptOrResponse. |
Returns

Promise<Content[]>

The full prompt which is effectively the conversation history.

Implementation of

ChatApi.appendToPrompt


buildTextPrompt

buildTextPrompt(text, _conversationId?): Promise<{ prompt: Content[] }>

Build prompt for sending text content to AI

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| text | string | The text content to be sent. |
| _conversationId? | string | Unique identifier of the conversation. |
Returns

Promise<{ prompt: Content[] }>

An object containing the prompt.

Implementation of

ChatApi.buildTextPrompt


buildVideoPrompt

buildVideoPrompt(videoFile, conversationId?): Promise<BuildPromptOutput<Content[], GeminiCompletionOptions>>

Build prompt for sending video content to AI. Sometimes, to include a video in the conversation, additional options and/or cleanup are needed. In such cases, options to be passed to the generateContent function and/or a cleanup callback function will be returned in the output of this function.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| videoFile | string | Path to the video file. |
| conversationId | string | Unique identifier of the conversation. |
Returns

Promise<BuildPromptOutput<Content[], GeminiCompletionOptions>>

An object containing the prompt, optional options, and an optional cleanup function.

Implementation of

ChatApi.buildVideoPrompt


generateContent

generateContent(prompt, options): Promise<GenerateContentResult>

Generate content based on the given prompt and options.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| prompt | Content[] | The full prompt to generate content. |
| options | GeminiCompletionOptions | Optional options to control the content generation. |
Returns

Promise<GenerateContentResult>

The generated content.

Implementation of

ChatApi.generateContent


getClient

getClient(): Promise<GenerativeModel>

Get the raw client. This function could be useful for advanced use cases.

Returns

Promise<GenerativeModel>

The raw client.

Implementation of

ChatApi.getClient


getResponseText

getResponseText(result): Promise<undefined | string>

Get the text from the response object

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| result | GenerateContentResult | the response object |
Returns

Promise<undefined | string>

Implementation of

ChatApi.getResponseText


isServerError

isServerError(error): boolean

Check if the error is a server error.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| error | any | any error object |
Returns

boolean

true if the error is a server error, false otherwise.

Implementation of

ChatApi.isServerError


isThrottlingError

isThrottlingError(error): boolean

Check if the error is a throttling error.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| error | any | any error object |
Returns

boolean

true if the error is a throttling error, false otherwise.

Implementation of

ChatApi.isThrottlingError

Interfaces

Interface: CreateIndexOptions

azure/video-retrieval-api-client.CreateIndexOptions

Properties
  • Optional features: IndexFeature[]
  • Optional metadataSchema: IndexMetadataSchema
  • Optional userData: object

Interface: DocumentSummary

azure/video-retrieval-api-client.DocumentSummary

Properties
  • createdDateTime: string
  • documentId: string
  • Optional documentUrl: string
  • lastModifiedDateTime: string
  • Optional metadata: object
  • Optional userData: object

Interface: IndexFeature

azure/video-retrieval-api-client.IndexFeature

Properties
  • Optional domain: "surveillance" | "generic"
  • Optional modelVersion: string
  • name: "vision" | "speech"

Interface: IndexMetadataSchema

azure/video-retrieval-api-client.IndexMetadataSchema

Properties
  • fields: IndexMetadataSchemaField[]
  • Optional language: string

Interface: IndexMetadataSchemaField

azure/video-retrieval-api-client.IndexMetadataSchemaField

Properties
  • filterable: boolean
  • name: string
  • searchable: boolean
  • type: "string" | "datetime"

Interface: IndexSummary

azure/video-retrieval-api-client.IndexSummary

Properties
  • createdDateTime: string
  • eTag: string
  • Optional features: IndexFeature[]
  • lastModifiedDateTime: string
  • name: string
  • Optional userData: object

Interface: IngestionRequest

azure/video-retrieval-api-client.IngestionRequest

Properties
  • Optional filterDefectedFrames: boolean
  • Optional generateInsightIntervals: boolean
  • Optional includeSpeechTranscript: boolean
  • Optional moderation: boolean
  • videos: VideoIngestion[]

Interface: IngestionStatusDetail

azure/video-retrieval-api-client.IngestionStatusDetail

Properties
  • documentId: string
  • documentUrl: string
  • lastUpdatedTime: string
  • succeeded: boolean

Interface: IngestionSummary

azure/video-retrieval-api-client.IngestionSummary

Properties
  • Optional batchName: string
  • createdDateTime: string
  • Optional fileStatusDetails: IngestionStatusDetail[]
  • lastModifiedDateTime: string
  • name: string
  • state: "NotStarted" | "Running" | "Completed" | "Failed" | "PartiallySucceeded"

Interface: VideoIngestion

azure/video-retrieval-api-client.VideoIngestion

Properties
  • Optional documentId: string
  • documentUrl: string
  • Optional metadata: object
  • mode: "update" | "remove" | "add"
  • Optional userData: object

Interface: AdditionalCompletionOptions

types.AdditionalCompletionOptions

Properties
| Property | Description |
| :------ | :------ |
| Optional backoffOnServerError: number[] | Array of retry backoff periods (unit: milliseconds) for situations where the server returns a 5xx response. |
| Optional backoffOnThrottling: number[] | Array of retry backoff periods (unit: milliseconds) for situations where the server returns a 429 response. |
| Optional startPromptText: string | The user prompt that will be sent before the video content. If not provided, nothing will be sent before the video content. |
| Optional systemPromptText: string | System prompt text. If not provided, a default prompt will be used. |

Interface: BuildPromptOutput<PROMPT, OPTIONS>

types.BuildPromptOutput

Type parameters
  • PROMPT
  • OPTIONS
Properties
Properties

  • Optional cleanup: () => Promise<any>
  • Optional options: Partial<OPTIONS>
  • prompt: PROMPT

Interface: ChatApi<CLIENT, OPTIONS, PROMPT, RESPONSE>

types.ChatApi

Type parameters
| Name | Type |
| :------ | :------ |
| CLIENT | CLIENT |
| OPTIONS | extends AdditionalCompletionOptions |
| PROMPT | PROMPT |
| RESPONSE | RESPONSE |
Implemented by

  • ChatGptApi
  • GeminiApi

Methods
appendToPrompt

appendToPrompt(newPromptOrResponse, prompt?): Promise<PROMPT>

Append a new prompt or response to form a full prompt. This function is useful to build a prompt that contains conversation history.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| newPromptOrResponse | PROMPT \| RESPONSE | A new prompt to be appended, or previous response to be appended. |
| prompt? | PROMPT | The conversation history which is a prompt containing previous prompts and responses. If it is not provided, the conversation history returned will contain only what is in newPromptOrResponse. |
Returns

Promise<PROMPT>

The full prompt which is effectively the conversation history.


buildTextPrompt

buildTextPrompt(text, conversationId?): Promise<{ prompt: PROMPT }>

Build prompt for sending text content to AI

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| text | string | The text content to be sent. |
| conversationId? | string | Unique identifier of the conversation. |
Returns

Promise<{ prompt: PROMPT }>

An object containing the prompt.


buildVideoPrompt

buildVideoPrompt(videoFile, conversationId?): Promise<BuildPromptOutput<PROMPT, OPTIONS>>

Build prompt for sending video content to AI. Sometimes, to include a video in the conversation, additional options and/or cleanup are needed. In such cases, options to be passed to the generateContent function and/or a cleanup callback function will be returned in the output of this function.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| videoFile | string | Path to the video file. |
| conversationId? | string | Unique identifier of the conversation. |
Returns

Promise<BuildPromptOutput<PROMPT, OPTIONS>>

An object containing the prompt, optional options, and an optional cleanup function.


generateContent

generateContent(prompt, options?): Promise<RESPONSE>

Generate content based on the given prompt and options.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| prompt | PROMPT | The full prompt to generate content. |
| options? | OPTIONS | Optional options to control the content generation. |
Returns

Promise<RESPONSE>

The generated content.


getClient

getClient(): Promise<CLIENT>

Get the raw client. This function could be useful for advanced use cases.

Returns

Promise<CLIENT>

The raw client.


getResponseText

getResponseText(response): Promise<undefined | string>

Get the text from the response object

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| response | RESPONSE | the response object |
Returns

Promise<undefined | string>


isServerError

isServerError(error): boolean

Check if the error is a server error.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| error | any | any error object |
Returns

boolean

true if the error is a server error, false otherwise.


isThrottlingError

isThrottlingError(error): boolean

Check if the error is a throttling error.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| error | any | any error object |
Returns

boolean

true if the error is a throttling error, false otherwise.

Interface: ChatApiOptions<CS, CO>

types.ChatApiOptions

Type parameters
  • CS
  • CO
Properties

| Property | Description |
| :------ | :------ |
| Optional clientSettings: CS | |
| Optional completionOptions: AdditionalCompletionOptions & CO | |
| credential: { key: string } | |
| Optional endpoint: string | |
| Optional tmpDir: string | Temporary directory for storing temporary files. If not specified, then the temporary directory of the OS will be used. |

Interface: ExtractVideoFramesOptions

types.ExtractVideoFramesOptions

Properties
| Property | Description |
| :------ | :------ |
| Optional deleteFilesWhenConversationEnds: boolean | Whether files should be deleted when the conversation ends. |
| Optional extractor: VideoFramesExtractor | Function for extracting frames from the video. If not specified, a default function using ffmpeg will be used. |
| Optional format: string | Image format of the extracted frames. Default value is 'jpg'. |
| Optional framesDirectoryResolver: (inputFile: string, tmpDir: string, conversationId: string) => string | Function for determining the directory location for storing extracted frames. If not specified, a default function will be used. The function takes three arguments: inputFile, tmpDir, and conversationId. |
| Optional height: number | Video frame height. Default is undefined, which means the scaling will be determined by the width option. If both width and height are not specified, then the frames will not be resized/scaled. |
| Optional interval: number | Interval between frames to be extracted. The unit is seconds. Default value is 5. |
| Optional limit: number | Maximum number of frames to be extracted. Default value is 10, which is the current per-request limitation of ChatGPT Vision. |
| Optional width: number | Video frame width. Default is 200. If both width and height are not specified, then the frames will not be resized/scaled. |

Interface: StorageOptions

types.StorageOptions

Properties
| Property | Description |
| :------ | :------ |
| Optional azureStorageConnectionString: string | |
| Optional deleteFilesWhenConversationEnds: boolean | Whether files should be deleted when the conversation ends. |
| Optional downloadUrlExpirationSeconds: number | Expiration time for the download URL of the frame images, in seconds. Default is 3600 seconds. |
| Optional storageContainerName: string | Storage container for storing frame images of the video. |
| Optional storagePathPrefix: string | Path prefix to be prepended for storing frame images of the video. Default is empty. |
| Optional uploader: FileBatchUploader | Function for uploading files. |

Interface: VideoRetrievalIndexOptions

types.VideoRetrievalIndexOptions

Properties
  • apiKey: string
  • Optional createIndexIfNotExists: boolean
  • Optional deleteDocumentWhenConversationEnds: boolean
  • Optional deleteIndexWhenConversationEnds: boolean
  • endpoint: string
  • Optional indexName: string

Modules

Module: aws

Functions
createAwsS3FileBatchUploader

createAwsS3FileBatchUploader(s3Client, expirationSeconds, parallelism?): FileBatchUploader

Parameters
| Name | Type | Default value |
| :------ | :------ | :------ |
| s3Client | S3Client | undefined |
| expirationSeconds | number | undefined |
| parallelism | number | 3 |
Returns

FileBatchUploader

Module: azure

References
CreateIndexOptions

Re-exports CreateIndexOptions


DocumentSummary

Re-exports DocumentSummary


IndexFeature

Re-exports IndexFeature


IndexMetadataSchema

Re-exports IndexMetadataSchema


IndexMetadataSchemaField

Re-exports IndexMetadataSchemaField


IndexSummary

Re-exports IndexSummary


IngestionRequest

Re-exports IngestionRequest


IngestionStatusDetail

Re-exports IngestionStatusDetail


IngestionSummary

Re-exports IngestionSummary


PaginatedWithNextLink

Re-exports PaginatedWithNextLink


VideoIngestion

Re-exports VideoIngestion


VideoRetrievalApiClient

Re-exports VideoRetrievalApiClient

Functions
createAzureBlobStorageFileBatchUploader

createAzureBlobStorageFileBatchUploader(blobServiceClient, expirationSeconds, parallelism?): FileBatchUploader

Parameters
| Name | Type | Default value |
| :------ | :------ | :------ |
| blobServiceClient | BlobServiceClient | undefined |
| expirationSeconds | number | undefined |
| parallelism | number | 3 |
Returns

FileBatchUploader

Module: azure/client-hack

Functions
fixClient

fixClient(openAIClient): void

Parameters
| Name | Type |
| :------ | :------ |
| openAIClient | any |
Returns

void

Module: azure/video-retrieval-api-client

Classes
Interfaces
Type Aliases

Ƭ PaginatedWithNextLink<T>: Object

Type parameters
  • T

Type declaration

| Name | Type |
| :------ | :------ |
| nextLink? | string |
| value | T[] |

Module: chat

Classes
Type Aliases
ChatAboutVideoWith

Ƭ ChatAboutVideoWith<T>: ChatAboutVideo<ClientOfChatApi<T>, OptionsOfChatApi<T>, PromptOfChatApi<T>, ResponseOfChatApi<T>>

Type parameters
  • T

ChatAboutVideoWithChatGpt

Ƭ ChatAboutVideoWithChatGpt: ChatAboutVideoWith<ChatGptApi>


ChatAboutVideoWithGemini

Ƭ ChatAboutVideoWithGemini: ChatAboutVideoWith<GeminiApi>


ConversationWith

Ƭ ConversationWith<T>: Conversation<ClientOfChatApi<T>, OptionsOfChatApi<T>, PromptOfChatApi<T>, ResponseOfChatApi<T>>

Type parameters
  • T

ConversationWithChatGpt

Ƭ ConversationWithChatGpt: ConversationWith<ChatGptApi>


ConversationWithGemini

Ƭ ConversationWithGemini: ConversationWith<GeminiApi>


SupportedChatApiOptions

Ƭ SupportedChatApiOptions: ChatGptOptions | GeminiOptions

Module: chat-gpt

Classes
Type Aliases
ChatGptClient

Ƭ ChatGptClient: OpenAIClient


ChatGptCompletionOptions

Ƭ ChatGptCompletionOptions: { deploymentName: string } & AdditionalCompletionOptions & Parameters<OpenAIClient["getChatCompletions"]>[2]


ChatGptOptions

Ƭ ChatGptOptions: { extractVideoFrames?: ExtractVideoFramesOptions ; storage: StorageOptions ; videoRetrievalIndex?: VideoRetrievalIndexOptions } & ChatApiOptions<OpenAIClientOptions, ChatGptCompletionOptions>


ChatGptPrompt

Ƭ ChatGptPrompt: Parameters<OpenAIClient["getChatCompletions"]>[1]


ChatGptResponse

Ƭ ChatGptResponse: ChatCompletions

Module: gemini

Classes
Type Aliases
GeminiClient

Ƭ GeminiClient: GenerativeModel


GeminiClientOptions

Ƭ GeminiClientOptions: Object

Type declaration
| Name | Type |
| :------ | :------ |
| modelParams | ModelParams |
| requestOptions? | RequestOptions |

GeminiCompletionOptions

Ƭ GeminiCompletionOptions: AdditionalCompletionOptions & Omit<GenerateContentRequest, "contents">


GeminiOptions

Ƭ GeminiOptions: { clientSettings: GeminiClientOptions ; extractVideoFrames: ExtractVideoFramesOptions } & ChatApiOptions<GeminiClientOptions, GeminiCompletionOptions>


GeminiPrompt

Ƭ GeminiPrompt: GenerateContentRequest["contents"]


GeminiResponse

Ƭ GeminiResponse: GenerateContentResult

Module: index

References
AdditionalCompletionOptions

Re-exports AdditionalCompletionOptions


BuildPromptOutput

Re-exports BuildPromptOutput


ChatAboutVideo

Re-exports ChatAboutVideo


ChatAboutVideoWith

Re-exports ChatAboutVideoWith


ChatAboutVideoWithChatGpt

Re-exports ChatAboutVideoWithChatGpt


ChatAboutVideoWithGemini

Re-exports ChatAboutVideoWithGemini


ChatApi

Re-exports ChatApi


ChatApiOptions

Re-exports ChatApiOptions


ClientOfChatApi

Re-exports ClientOfChatApi


Conversation

Re-exports Conversation


ConversationWith

Re-exports ConversationWith


ConversationWithChatGpt

Re-exports ConversationWithChatGpt


ConversationWithGemini

Re-exports ConversationWithGemini


ExtractVideoFramesOptions

Re-exports ExtractVideoFramesOptions


FileBatchUploader

Re-exports FileBatchUploader


OptionsOfChatApi

Re-exports OptionsOfChatApi


PromptOfChatApi

Re-exports PromptOfChatApi


ResponseOfChatApi

Re-exports ResponseOfChatApi


StorageOptions

Re-exports StorageOptions


SupportedChatApiOptions

Re-exports SupportedChatApiOptions


VideoFramesExtractor

Re-exports VideoFramesExtractor


VideoRetrievalIndexOptions

Re-exports VideoRetrievalIndexOptions


extractVideoFramesWithFfmpeg

Re-exports extractVideoFramesWithFfmpeg


lazyCreatedFileBatchUploader

Re-exports lazyCreatedFileBatchUploader


lazyCreatedVideoFramesExtractor

Re-exports lazyCreatedVideoFramesExtractor

Module: storage

References
FileBatchUploader

Re-exports FileBatchUploader

Functions
lazyCreatedFileBatchUploader

lazyCreatedFileBatchUploader(creator): FileBatchUploader

Parameters
| Name | Type |
| :------ | :------ |
| creator | Promise<FileBatchUploader> |
Returns

FileBatchUploader

Module: storage/types

Type Aliases
FileBatchUploader

Ƭ FileBatchUploader: (dir: string, relativePaths: string[], containerName: string, blobPathPrefix: string) => Promise<{ cleanup: () => Promise<any> ; downloadUrls: string[] }>

Type declaration

▸ (dir, relativePaths, containerName, blobPathPrefix): Promise<{ cleanup: () => Promise<any> ; downloadUrls: string[] }>

Function that uploads files to cloud storage.

Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| dir | string | The directory path where the files are located. |
| relativePaths | string[] | An array of relative paths of the files to be uploaded. |
| containerName | string | The name of the container where the files will be uploaded. |
| blobPathPrefix | string | The prefix for the blob paths (file paths) in the container. |

Returns

Promise<{ cleanup: () => Promise<any> ; downloadUrls: string[] }>

A Promise that resolves with an object containing an array of download URLs for the uploaded files and a cleanup function to remove the uploaded files from the container.
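A minimal sketch of a custom FileBatchUploader conforming to this signature; uploadOneFile and deleteUploadedFiles are hypothetical placeholders for your own storage SDK calls:

import path from 'node:path';
import type { FileBatchUploader } from 'chat-about-video';

// Hypothetical helpers - replace with your own storage integration.
declare function uploadOneFile(localPath: string, container: string, blobPath: string): Promise<string>;
declare function deleteUploadedFiles(container: string, blobPaths: string[]): Promise<void>;

const myUploader: FileBatchUploader = async (dir, relativePaths, containerName, blobPathPrefix) => {
  const blobPaths = relativePaths.map((p) => `${blobPathPrefix}${p}`);
  const downloadUrls: string[] = [];
  for (let i = 0; i < relativePaths.length; i++) {
    downloadUrls.push(await uploadOneFile(path.join(dir, relativePaths[i]), containerName, blobPaths[i]));
  }
  return {
    downloadUrls,
    cleanup: () => deleteUploadedFiles(containerName, blobPaths),
  };
};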

Module: types

Interfaces
Type Aliases
ClientOfChatApi

Ƭ ClientOfChatApi<T>: T extends ChatApi<infer CLIENT, any, any, any> ? CLIENT : never

Type parameters
  • T

OptionsOfChatApi

Ƭ OptionsOfChatApi<T>: T extends ChatApi<any, infer OPTIONS, any, any> ? OPTIONS : never

Type parameters
  • T

PromptOfChatApi

Ƭ PromptOfChatApi<T>: T extends ChatApi<any, any, infer PROMPT, any> ? PROMPT : never

Type parameters
  • T

ResponseOfChatApi

Ƭ ResponseOfChatApi<T>: T extends ChatApi<any, any, any, infer RESPONSE> ? RESPONSE : never

Type parameters
  • T

Module: utils

Functions
effectiveExtractVideoFramesOptions

effectiveExtractVideoFramesOptions(options?): Pick<ExtractVideoFramesOptions, "height"> & Required<Omit<ExtractVideoFramesOptions, "height">>

Calculate the effective values for ExtractVideoFramesOptions by combining the default values and the values provided

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| options? | ExtractVideoFramesOptions | the options containing the values provided |
Returns

Pick<ExtractVideoFramesOptions, "height"> & Required<Omit<ExtractVideoFramesOptions, "height">>

The effective values for ExtractVideoFramesOptions


effectiveStorageOptions

effectiveStorageOptions(options): Required<Pick<StorageOptions, "uploader">> & StorageOptions

Calculate the effective values for StorageOptions by combining the default values and the values provided

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| options | StorageOptions | the options containing the values provided |
Returns

Required<Pick<StorageOptions, "uploader">> & StorageOptions

The effective values for StorageOptions


effectiveVideoRetrievalIndexOptions

effectiveVideoRetrievalIndexOptions(options): Required<Pick<VideoRetrievalIndexOptions, "createIndexIfNotExists" | "deleteDocumentWhenConversationEnds" | "deleteIndexWhenConversationEnds">> & VideoRetrievalIndexOptions

Calculate the effective values for VideoRetrievalIndexOptions by combining the default values and the values provided

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| options | VideoRetrievalIndexOptions | the options containing the values provided |
Returns

Required<Pick<VideoRetrievalIndexOptions, "createIndexIfNotExists" | "deleteDocumentWhenConversationEnds" | "deleteIndexWhenConversationEnds">> & VideoRetrievalIndexOptions

The effective values for VideoRetrievalIndexOptions

Module: video

References
VideoFramesExtractor

Re-exports VideoFramesExtractor


extractVideoFramesWithFfmpeg

Re-exports extractVideoFramesWithFfmpeg

Functions
lazyCreatedVideoFramesExtractor

lazyCreatedVideoFramesExtractor(creator): VideoFramesExtractor

Parameters
| Name | Type |
| :------ | :------ |
| creator | Promise<VideoFramesExtractor> |
Returns

VideoFramesExtractor

Module: video/ffmpeg

Functions
extractVideoFramesWithFfmpeg

extractVideoFramesWithFfmpeg(inputFile, outputDir, intervalSec, format?, width?, height?, startSec?, endSec?, limit?): Promise<{ cleanup: () => Promise<any> ; relativePaths: string[] }>

Function that extracts frame images from a video file.

Parameters
| Name | Type | Description |
| :------ | :------ | :------ |
| inputFile | string | Path to the input video file. |
| outputDir | string | Path to the output directory where frame images will be saved. |
| intervalSec | number | Interval in seconds between each frame extraction. |
| format? | string | Format of the output frame images (e.g., 'jpg', 'png'). |
| width? | number | Width of the output frame images in pixels. |
| height? | number | Height of the output frame images in pixels. |
| startSec? | number | Start time of the video segment to extract in seconds, inclusive. |
| endSec? | number | End time of the video segment to extract in seconds, exclusive. |
| limit? | number | Maximum number of frames to extract. |
Returns

Promise<{ cleanup: () => Promise<any> ; relativePaths: string[] }>

An object containing an array of relative paths to the extracted frame images and a cleanup function for deleting those files.
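A minimal sketch of calling it directly (the input path is illustrative; the function is re-exported from the package index):

import { extractVideoFramesWithFfmpeg } from 'chat-about-video';

// Extract up to 10 jpg frames, one every 5 seconds, scaled to 200px wide.
const { relativePaths, cleanup } = await extractVideoFramesWithFfmpeg(
  'input.mp4', // inputFile
  '/tmp/frames', // outputDir
  5, // intervalSec
  'jpg', // format
  200, // width
  undefined, // height (scaling determined by width)
  undefined, // startSec
  undefined, // endSec
  10, // limit
);
console.log(relativePaths);
await cleanup(); // delete the extracted frame files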

Module: video/types

Type Aliases
VideoFramesExtractor

Ƭ VideoFramesExtractor: (inputFile: string, outputDir: string, intervalSec: number, format?: string, width?: number, height?: number, startSec?: number, endSec?: number, limit?: number) => Promise<{ cleanup: () => Promise<any> ; relativePaths: string[] }>

Type declaration

▸ (inputFile, outputDir, intervalSec, format?, width?, height?, startSec?, endSec?, limit?): Promise<{ cleanup: () => Promise<any> ; relativePaths: string[] }>

Function that extracts frame images from a video file.

Parameters

| Name | Type | Description |
| :------ | :------ | :------ |
| inputFile | string | Path to the input video file. |
| outputDir | string | Path to the output directory where frame images will be saved. |
| intervalSec | number | Interval in seconds between each frame extraction. |
| format? | string | Format of the output frame images (e.g., 'jpg', 'png'). |
| width? | number | Width of the output frame images in pixels. |
| height? | number | Height of the output frame images in pixels. |
| startSec? | number | Start time of the video segment to extract in seconds, inclusive. |
| endSec? | number | End time of the video segment to extract in seconds, exclusive. |
| limit? | number | Maximum number of frames to extract. |

Returns

Promise<{ cleanup: () => Promise<any> ; relativePaths: string[] }>

An object containing an array of relative paths to the extracted frame images and a cleanup function for deleting those files.
