
openai


openai - npm Package Compare versions

Comparing version 4.46.1 to 4.47.0


package.json
 {
   "name": "openai",
-  "version": "4.46.1",
+  "version": "4.47.0",
   "description": "The official TypeScript library for the OpenAI API",

@@ -5,0 +5,0 @@ "author": "OpenAI <support@openai.com>",

@@ -22,3 +22,3 @@ # OpenAI Node API Library

```diff
-import OpenAI from 'https://deno.land/x/openai@v4.46.1/mod.ts';
+import OpenAI from 'https://deno.land/x/openai@v4.47.0/mod.ts';
```

@@ -25,0 +25,0 @@

@@ -157,5 +157,7 @@ import * as Core from "../core.js";

 * The endpoint to be used for all requests in the batch. Currently
-* `/v1/chat/completions` and `/v1/embeddings` are supported.
+* `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
+* Note that `/v1/embeddings` batches are also restricted to a maximum of 50,000
+* embedding inputs across all requests in the batch.
 */
-endpoint: '/v1/chat/completions' | '/v1/embeddings';
+endpoint: '/v1/chat/completions' | '/v1/embeddings' | '/v1/completions';
/**

@@ -169,3 +171,4 @@ * The ID of an uploaded file that contains requests for the new batch.

 * [JSONL file](https://platform.openai.com/docs/api-reference/batch/requestInput),
-* and must be uploaded with the purpose `batch`.
+* and must be uploaded with the purpose `batch`. The file can contain up to 50,000
+* requests, and can be up to 100 MB in size.
 */

@@ -172,0 +175,0 @@ input_file_id: string;
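The two hunks above widen the Batch API surface: `/v1/completions` joins the endpoint union, and the input file gains explicit limits. A minimal TypeScript sketch of narrowing a runtime string to the 4.47.0 union before batch creation (the helper name is ours, not an SDK export):

```typescript
// Local mirror of the endpoint union as of 4.47.0; the SDK's own
// BatchCreateParams['endpoint'] uses the same literal types.
type BatchEndpoint = '/v1/chat/completions' | '/v1/embeddings' | '/v1/completions';

const SUPPORTED_ENDPOINTS: readonly string[] = [
  '/v1/chat/completions',
  '/v1/embeddings',
  '/v1/completions', // newly supported in 4.47.0
];

// Hypothetical helper: narrows an arbitrary string to BatchEndpoint or throws.
function toBatchEndpoint(endpoint: string): BatchEndpoint {
  if (!SUPPORTED_ENDPOINTS.includes(endpoint)) {
    throw new Error(`unsupported batch endpoint: ${endpoint}`);
  }
  return endpoint as BatchEndpoint;
}
```

The narrowed value can then be passed straight through as the `endpoint` field of the params given to `client.batches.create()`.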

@@ -87,4 +87,5 @@ import * as Core from "../../core.js";

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -811,4 +812,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -941,4 +943,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -945,0 +948,0 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

@@ -210,4 +210,5 @@ import * as Core from "../../../../core.js";

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -232,4 +233,4 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * The status of the run, which can be either `queued`, `in_progress`,
-* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or
-* `expired`.
+* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`,
+* `incomplete`, or `expired`.
 */

@@ -366,6 +367,6 @@ status: RunStatus;

 * The status of the run, which can be either `queued`, `in_progress`,
-* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or
-* `expired`.
+* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`,
+* `incomplete`, or `expired`.
 */
-export type RunStatus = 'queued' | 'in_progress' | 'requires_action' | 'cancelling' | 'cancelled' | 'failed' | 'completed' | 'expired';
+export type RunStatus = 'queued' | 'in_progress' | 'requires_action' | 'cancelling' | 'cancelled' | 'failed' | 'completed' | 'incomplete' | 'expired';
export type RunCreateParams = RunCreateParamsNonStreaming | RunCreateParamsStreaming;
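Because `RunStatus` gains the `incomplete` member, any exhaustive switch over run states needs a new arm. A standalone sketch, with the union copied locally and `isTerminal` as our own name; treating `incomplete` as terminal matches the new `case 'incomplete':` arm in the SDK's polling code elsewhere in this diff:

```typescript
// Copy of the widened 4.47.0 union, inlined so this compiles standalone.
type RunStatus =
  | 'queued' | 'in_progress' | 'requires_action' | 'cancelling'
  | 'cancelled' | 'failed' | 'completed' | 'incomplete' | 'expired';

// Classifies a run as still-running or terminal; the `never` default makes
// the compiler flag any future additions to the union.
function isTerminal(status: RunStatus): boolean {
  switch (status) {
    case 'queued':
    case 'in_progress':
    case 'cancelling':
      return false;
    case 'requires_action':
    case 'cancelled':
    case 'failed':
    case 'completed':
    case 'incomplete': // new in 4.47.0
    case 'expired':
      return true;
    default: {
      const exhaustive: never = status;
      throw new Error(`unhandled status: ${exhaustive}`);
    }
  }
}
```

Without the new arm, the `never` assignment above would be a compile error after upgrading, which is exactly the safety net exhaustive switches buy.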

@@ -427,4 +428,5 @@ export interface RunCreateParamsBase {

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -638,4 +640,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -802,4 +805,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -966,4 +970,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -970,0 +975,0 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

@@ -140,2 +140,3 @@ "use strict";

 case 'requires_action':
+case 'incomplete':
 case 'cancelled':

@@ -142,0 +143,0 @@ case 'completed':

@@ -62,4 +62,5 @@ import * as Core from "../../../core.js";

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -379,4 +380,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -675,4 +677,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -947,4 +950,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -951,0 +955,0 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

@@ -88,2 +88,3 @@ "use strict";

 case 'failed':
+case 'cancelled':
 case 'completed':

@@ -90,0 +91,0 @@ return batch;

@@ -9,11 +9,15 @@ import * as Core from "../core.js";

 /**
-* Upload a file that can be used across various endpoints. The size of all the
-* files uploaded by one organization can be up to 100 GB.
+* Upload a file that can be used across various endpoints. Individual files can be
+* up to 512 MB, and the size of all files uploaded by one organization can be up
+* to 100 GB.
 *
-* The size of individual files can be a maximum of 512 MB or 2 million tokens for
-* Assistants. See the
-* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
-* learn more about the types of files supported. The Fine-tuning API only supports
-* `.jsonl` files.
+* The Assistants API supports files up to 2 million tokens and of specific file
+* types. See the
+* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) for
+* details.
+*
+* The Fine-tuning API only supports `.jsonl` files.
+*
+* The Batch API only supports `.jsonl` files up to 100 MB in size.
+*
 * Please [contact us](https://help.openai.com/) if you need to increase these

@@ -20,0 +24,0 @@ * storage limits.
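The rewritten doc comment pins down concrete limits: 512 MB per file, and for the Batch API 50,000 requests and 100 MB per `.jsonl` file. Those numbers lend themselves to a client-side pre-check before upload; a sketch under those assumptions (constant and function names are ours, not SDK exports):

```typescript
const BATCH_MAX_REQUESTS = 50_000;          // per the 4.47.0 doc comment
const BATCH_MAX_BYTES = 100 * 1024 * 1024;  // 100 MB, per the same comment

// Counts non-empty JSONL lines and encoded byte size, throwing if either
// exceeds the documented Batch API limits.
function checkBatchFile(jsonl: string): { requests: number; bytes: number } {
  const bytes = new TextEncoder().encode(jsonl).length;
  const requests = jsonl.split('\n').filter((line) => line.trim().length > 0).length;
  if (requests > BATCH_MAX_REQUESTS) {
    throw new Error(`too many requests in batch file: ${requests}`);
  }
  if (bytes > BATCH_MAX_BYTES) {
    throw new Error(`batch file too large: ${bytes} bytes`);
  }
  return { requests, bytes };
}
```

Running the check locally avoids a round trip that the API would reject anyway.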

@@ -37,11 +37,15 @@ "use strict";

 /**
-* Upload a file that can be used across various endpoints. The size of all the
-* files uploaded by one organization can be up to 100 GB.
+* Upload a file that can be used across various endpoints. Individual files can be
+* up to 512 MB, and the size of all files uploaded by one organization can be up
+* to 100 GB.
 *
-* The size of individual files can be a maximum of 512 MB or 2 million tokens for
-* Assistants. See the
-* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
-* learn more about the types of files supported. The Fine-tuning API only supports
-* `.jsonl` files.
+* The Assistants API supports files up to 2 million tokens and of specific file
+* types. See the
+* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) for
+* details.
+*
+* The Fine-tuning API only supports `.jsonl` files.
+*
+* The Batch API only supports `.jsonl` files up to 100 MB in size.
+*
 * Please [contact us](https://help.openai.com/) if you need to increase these

@@ -48,0 +52,0 @@ * storage limits.

@@ -218,5 +218,7 @@ // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 * The endpoint to be used for all requests in the batch. Currently
-* `/v1/chat/completions` and `/v1/embeddings` are supported.
+* `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
+* Note that `/v1/embeddings` batches are also restricted to a maximum of 50,000
+* embedding inputs across all requests in the batch.
 */
-endpoint: '/v1/chat/completions' | '/v1/embeddings';
+endpoint: '/v1/chat/completions' | '/v1/embeddings' | '/v1/completions';

@@ -231,3 +233,4 @@ /**

 * [JSONL file](https://platform.openai.com/docs/api-reference/batch/requestInput),
-* and must be uploaded with the purpose `batch`.
+* and must be uploaded with the purpose `batch`. The file can contain up to 50,000
+* requests, and can be up to 100 MB in size.
 */

@@ -234,0 +237,0 @@ input_file_id: string;

@@ -147,4 +147,5 @@ // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1051,4 +1052,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1198,4 +1200,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1202,0 +1205,0 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

@@ -179,2 +179,3 @@ // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 case 'requires_action':
+case 'incomplete':
 case 'cancelled':

@@ -413,4 +414,5 @@ case 'completed':

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -437,4 +439,4 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * The status of the run, which can be either `queued`, `in_progress`,
-* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or
-* `expired`.
+* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`,
+* `incomplete`, or `expired`.
 */

@@ -590,4 +592,4 @@ status: RunStatus;

 * The status of the run, which can be either `queued`, `in_progress`,
-* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or
-* `expired`.
+* `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`,
+* `incomplete`, or `expired`.
 */

@@ -602,2 +604,3 @@ export type RunStatus =

 | 'completed'
+| 'incomplete'
 | 'expired';

@@ -692,4 +695,5 @@

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -954,4 +958,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1162,4 +1167,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1370,4 +1376,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1374,0 +1381,0 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

@@ -133,4 +133,5 @@ // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -520,4 +521,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -880,4 +882,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1212,4 +1215,5 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

 * Specifies the format that the model must output. Compatible with
-* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
-* all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
+* [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
+* [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4),
+* and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 *

@@ -1216,0 +1220,0 @@ * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the

@@ -141,2 +141,3 @@ // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 case 'failed':
+case 'cancelled':
 case 'completed':

@@ -143,0 +144,0 @@ return batch;

@@ -15,11 +15,15 @@ // File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 /**
-* Upload a file that can be used across various endpoints. The size of all the
-* files uploaded by one organization can be up to 100 GB.
+* Upload a file that can be used across various endpoints. Individual files can be
+* up to 512 MB, and the size of all files uploaded by one organization can be up
+* to 100 GB.
 *
-* The size of individual files can be a maximum of 512 MB or 2 million tokens for
-* Assistants. See the
-* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
-* learn more about the types of files supported. The Fine-tuning API only supports
-* `.jsonl` files.
+* The Assistants API supports files up to 2 million tokens and of specific file
+* types. See the
+* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) for
+* details.
+*
+* The Fine-tuning API only supports `.jsonl` files.
+*
+* The Batch API only supports `.jsonl` files up to 100 MB in size.
+*
 * Please [contact us](https://help.openai.com/) if you need to increase these

@@ -26,0 +30,0 @@ * storage limits.

@@ -1,1 +0,1 @@

-export const VERSION = '4.46.1'; // x-release-please-version
+export const VERSION = '4.47.0'; // x-release-please-version

@@ -1,2 +0,2 @@

-export declare const VERSION = "4.46.1";
+export declare const VERSION = "4.47.0";
 //# sourceMappingURL=version.d.ts.map
 "use strict";
 Object.defineProperty(exports, "__esModule", { value: true });
 exports.VERSION = void 0;
-exports.VERSION = '4.46.1'; // x-release-please-version
+exports.VERSION = '4.47.0'; // x-release-please-version
 //# sourceMappingURL=version.js.map
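The only change in these build artifacts is the version constant, 4.46.1 to 4.47.0, a minor bump under semver. A minimal numeric comparison, not part of the package, that confirms the ordering:

```typescript
// Compares two dotted version strings component by component;
// returns -1, 0, or 1. Pre-release tags are out of scope for this sketch.
function compareVersions(a: string, b: string): number {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] ?? 0;
    const y = pb[i] ?? 0;
    if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}
```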

Sorry, the diff of this file is too big to display

Sorry, the diff of this file is not supported yet

