cohere-ai - npm Package Compare versions

Comparing version 7.8.0 to 7.9.0

api/errors/ServiceUnavailableError.d.ts


api/client/requests/ChatRequest.d.ts

@@ -16,5 +16,2 @@ /**

* message: "How can I help you today?"
* }, {
* role: Cohere.ChatMessageRole.Chatbot,
* message: "message"
* }],

@@ -39,9 +36,15 @@ * promptTruncation: Cohere.ChatRequestPromptTruncation.Off,

/**
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style.
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the `SYSTEM` role.
*
* The `SYSTEM` role is also used for the contents of the optional `chat_history=` parameter. When used with the `chat_history=` parameter it adds content throughout a conversation. Conversely, when used with the `preamble=` parameter it adds content at the start of the conversation only.
*
*/
preamble?: string;
/**
* A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's `message`.
* A list of previous messages between the user and the model, giving the model conversational context for responding to the user's `message`.
*
* Each item represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
*
* The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
*
*/

@@ -130,3 +133,10 @@ chatHistory?: Cohere.ChatMessage[];

p?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**
* A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
*
*/
stopSequences?: string[];
/**
* Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

@@ -150,4 +160,3 @@ *

*
* When `tools` is passed, The `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made
* the `tool_calls` array will be empty.
* When `tools` is passed (without `tool_results`), the `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.
*

@@ -157,5 +166,6 @@ */

/**
* A list of results from invoking tools. Results are used to generate text and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
*
* **Note**: `outputs` must be a list of objects. If your tool returns a single object (eg `{"status": 200}`), make sure to wrap it in a list.
* ```

@@ -165,6 +175,6 @@ * tool_results = [

* "call": {
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* },

@@ -178,2 +188,3 @@ * "outputs": [{

* ```
* **Note**: Chat calls with `tool_results` should not be included in the Chat history to avoid duplication of the message text.
*

@@ -180,0 +191,0 @@ */
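The `ChatRequest` changes above (the `SYSTEM`-role `preamble`, the expanded `chatHistory` docs, and the new `seed` and `stopSequences` fields) can be summarized with a small sketch. The type names below are simplified stand-ins for illustration, not the SDK's actual declarations:

```typescript
// Simplified stand-in types mirroring the fields documented in the
// ChatRequest.d.ts diff above (illustrative only, not the SDK types).
type ChatMessageRole = "CHATBOT" | "SYSTEM" | "USER";

interface ChatMessage {
  role: ChatMessageRole;
  message: string;
}

interface ChatRequestSketch {
  message: string;
  preamble?: string;            // replaces the default SYSTEM preamble
  chatHistory?: ChatMessage[];  // prior turns, excluding the current one
  seed?: number;                // best-effort deterministic sampling
  stopSequences?: string[];     // up to 5 stop strings
}

const request: ChatRequestSketch = {
  message: "What is the weather like?",
  preamble: "You are a terse assistant.",
  chatHistory: [
    { role: "USER", message: "Hi there" },
    { role: "CHATBOT", message: "How can I help you today?" },
  ],
  seed: 42,
  stopSequences: ["STOP"],
};
```

Per the doc comments in the diff, `SYSTEM` messages generally belong in `preamble` rather than `chatHistory`.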

@@ -19,9 +19,15 @@ /**

/**
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style.
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the `SYSTEM` role.
*
* The `SYSTEM` role is also used for the contents of the optional `chat_history=` parameter. When used with the `chat_history=` parameter it adds content throughout a conversation. Conversely, when used with the `preamble=` parameter it adds content at the start of the conversation only.
*
*/
preamble?: string;
/**
* A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's `message`.
* A list of previous messages between the user and the model, giving the model conversational context for responding to the user's `message`.
*
* Each item represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
*
* The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
*
*/

@@ -110,3 +116,10 @@ chatHistory?: Cohere.ChatMessage[];

p?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**
* A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
*
*/
stopSequences?: string[];
/**
* Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

@@ -130,4 +143,3 @@ *

*
* When `tools` is passed, The `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made
* the `tool_calls` array will be empty.
* When `tools` is passed (without `tool_results`), the `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.
*

@@ -137,5 +149,6 @@ */

/**
* A list of results from invoking tools. Results are used to generate text and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
*
* **Note**: `outputs` must be a list of objects. If your tool returns a single object (eg `{"status": 200}`), make sure to wrap it in a list.
* ```

@@ -145,6 +158,6 @@ * tool_results = [

* "call": {
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* },

@@ -158,2 +171,3 @@ * "outputs": [{

* ```
* **Note**: Chat calls with `tool_results` should not be included in the Chat history to avoid duplication of the message text.
*

@@ -160,0 +174,0 @@ */
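The **Note** in both hunks above says `outputs` must always be a list of objects, even when a tool returns a single object. A hypothetical helper (not part of the SDK) that normalizes tool results before building `tool_results` might look like:

```typescript
// Hypothetical helper (not part of the SDK): ensure a tool's return value
// is wrapped as a list of objects, as the `outputs` docs require.
type ToolOutput = Record<string, unknown>;

function asOutputs(result: ToolOutput | ToolOutput[]): ToolOutput[] {
  return Array.isArray(result) ? result : [result];
}

const wrapped = asOutputs({ status: 200 });        // single object gets wrapped
const passedThrough = asOutputs([{ status: 200 }, { status: 404 }]); // lists pass through
```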

@@ -8,3 +8,3 @@ /**

* {
* inputs: ["Confirm your email address", "hey i need u to send some $", "inputs"],
* inputs: ["Confirm your email address", "hey i need u to send some $"],
* examples: [{

@@ -40,3 +40,3 @@ * text: "Dermatologists don't like her!",

* label: "Not spam"
* }, {}],
* }],
* preset: "my-preset-a58sbd"

@@ -43,0 +43,0 @@ * }

@@ -7,3 +7,3 @@ /**

* {
* tokens: [10104, 12221, 1315, 34, 1420, 69, 1]
* tokens: [10104, 12221, 1315, 34, 1420, 69]
* }

@@ -10,0 +10,0 @@ */

@@ -53,2 +53,4 @@ /**

temperature?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**

@@ -55,0 +57,0 @@ * Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).

@@ -45,2 +45,4 @@ /**

temperature?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**

@@ -47,0 +49,0 @@ * Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).

@@ -10,3 +10,3 @@ /**

* query: "What is the capital of the United States?",
* documents: []
* documents: ["Carson City is the capital city of the American state of Nevada.", "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.", "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district.", "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."]
* }

@@ -13,0 +13,0 @@ */

export * from "./BadRequestError";
export * from "./UnauthorizedError";
export * from "./ForbiddenError";

@@ -6,1 +7,2 @@ export * from "./NotFoundError";

export * from "./InternalServerError";
export * from "./ServiceUnavailableError";

@@ -18,2 +18,3 @@ "use strict";

__exportStar(require("./BadRequestError"), exports);
__exportStar(require("./UnauthorizedError"), exports);
__exportStar(require("./ForbiddenError"), exports);

@@ -23,1 +24,2 @@ __exportStar(require("./NotFoundError"), exports);

__exportStar(require("./InternalServerError"), exports);
__exportStar(require("./ServiceUnavailableError"), exports);

@@ -82,3 +82,3 @@ "use strict";

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -157,3 +157,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -231,3 +231,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -305,3 +305,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -381,3 +381,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -462,3 +462,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -465,0 +465,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -75,6 +75,6 @@ "use strict";

if (limit != null) {
_queryParams["limit"] = limit;
_queryParams["limit"] = limit.toString();
}
if (offset != null) {
_queryParams["offset"] = offset;
_queryParams["offset"] = offset.toString();
}
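The change above stringifies numeric query parameters (`limit.toString()`, `offset.toString()`) before they are serialized, since query encoders such as `URLSearchParams` expect string values. A minimal sketch of the idea, using a hypothetical helper rather than the SDK's internal fetcher:

```typescript
// Hypothetical sketch of why numeric params are stringified: build a query
// string from mixed string/number values, mirroring limit.toString() above.
function buildQuery(params: Record<string, string | number | undefined>): string {
  const qs = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    if (value != null) qs.set(key, value.toString());
  }
  return qs.toString();
}

const query = buildQuery({ limit: 10, offset: 20 }); // "limit=10&offset=20"
```

This pairs with the request-type change further down, where `limit` and `offset` become `number` on the public interface while the client handles stringification internally.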

@@ -91,3 +91,3 @@ const _response = yield core.fetcher({

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -189,3 +189,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -255,3 +255,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -319,3 +319,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -383,3 +383,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -386,0 +386,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -24,7 +24,7 @@ /**

*/
limit?: string;
limit?: number;
/**
* optional offset to start of results
*/
offset?: string;
offset?: number;
}

@@ -74,3 +74,3 @@ "use strict";

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -148,3 +148,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -220,3 +220,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -293,3 +293,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -296,0 +296,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -5,2 +5,3 @@ export * as embedJobs from "./embedJobs";

export * from "./datasets/types";
export * as finetuning from "./finetuning";
export * as connectors from "./connectors";

@@ -12,1 +13,2 @@ export * as models from "./models";

export * from "./models/client/requests";
export * from "./finetuning/client/requests";

@@ -29,3 +29,3 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.models = exports.connectors = exports.datasets = exports.embedJobs = void 0;
exports.models = exports.connectors = exports.finetuning = exports.datasets = exports.embedJobs = void 0;
exports.embedJobs = __importStar(require("./embedJobs"));

@@ -35,2 +35,3 @@ __exportStar(require("./embedJobs/types"), exports);

__exportStar(require("./datasets/types"), exports);
exports.finetuning = __importStar(require("./finetuning"));
exports.connectors = __importStar(require("./connectors"));

@@ -42,1 +43,2 @@ exports.models = __importStar(require("./models"));

__exportStar(require("./models/client/requests"), exports);
__exportStar(require("./finetuning/client/requests"), exports);

@@ -83,3 +83,3 @@ "use strict";

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -86,0 +86,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -6,13 +6,11 @@ /**

/**
* A single message in a chat history. Contains the role of the sender, the text contents of the message.
* Represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
*
* The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
*/
export interface ChatMessage {
/** One of CHATBOT|USER to identify who the message is coming from. */
/** One of `CHATBOT`, `SYSTEM`, or `USER` to identify who the message is coming from. */
role: Cohere.ChatMessageRole;
/** Contents of the chat message. */
message: string;
/** Unique identifier for the generated reply. Useful for submitting feedback. */
generationId?: string;
/** Unique identifier for the response. */
responseId?: string;
}

@@ -5,8 +5,9 @@ /**

/**
* One of CHATBOT|USER to identify who the message is coming from.
* One of `CHATBOT`, `SYSTEM`, or `USER` to identify who the message is coming from.
*/
export declare type ChatMessageRole = "CHATBOT" | "USER";
export declare type ChatMessageRole = "CHATBOT" | "SYSTEM" | "USER";
export declare const ChatMessageRole: {
readonly Chatbot: "CHATBOT";
readonly System: "SYSTEM";
readonly User: "USER";
};

@@ -9,3 +9,4 @@ "use strict";

Chatbot: "CHATBOT",
System: "SYSTEM",
User: "USER",
};
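The `ChatMessageRole` change above adds `SYSTEM` using the "const object plus union type" pattern common in generated SDKs: a `const` object supplies the runtime values while a same-named type gives the literal union. A self-contained sketch of the pattern:

```typescript
// Const object + union type: runtime values and compile-time literals share
// one name, as in the ChatMessageRole declaration above.
const ChatMessageRole = {
  Chatbot: "CHATBOT",
  System: "SYSTEM",
  User: "USER",
} as const;

type ChatMessageRole = (typeof ChatMessageRole)[keyof typeof ChatMessageRole];

const role: ChatMessageRole = ChatMessageRole.System;
```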

@@ -34,3 +34,2 @@ export * from "./ChatStreamRequestPromptTruncation";

export * from "./DetokenizeResponse";
export * from "./ToolCall";
export * from "./ChatMessageRole";

@@ -42,2 +41,3 @@ export * from "./ChatMessage";

export * from "./Tool";
export * from "./ToolCall";
export * from "./ChatCitation";

@@ -44,0 +44,0 @@ export * from "./ChatSearchQuery";

@@ -50,3 +50,2 @@ "use strict";

__exportStar(require("./DetokenizeResponse"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatMessageRole"), exports);

@@ -58,2 +57,3 @@ __exportStar(require("./ChatMessage"), exports);

__exportStar(require("./Tool"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatCitation"), exports);

@@ -60,0 +60,0 @@ __exportStar(require("./ChatSearchQuery"), exports);

@@ -12,3 +12,2 @@ /**

parameters: Record<string, unknown>;
generationId: string;
}

@@ -11,2 +11,3 @@ /**

import { Models } from "./api/resources/models/client/Client";
import { Finetuning } from "./api/resources/finetuning/client/Client";
export declare namespace CohereClient {

@@ -46,5 +47,2 @@ interface Options {

* message: "How can I help you today?"
* }, {
* role: Cohere.ChatMessageRole.Chatbot,
* message: "message"
* }],

@@ -57,7 +55,15 @@ * promptTruncation: Cohere.ChatRequestPromptTruncation.Off,

/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
*/
generateStream(request: Cohere.GenerateStreamRequest, requestOptions?: CohereClient.RequestOptions): Promise<core.Stream<Cohere.GenerateStreamedResponse>>;
/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
* @throws {@link Cohere.BadRequestError}

@@ -94,3 +100,3 @@ * @throws {@link Cohere.TooManyRequestsError}

* query: "What is the capital of the United States?",
* documents: []
* documents: ["Carson City is the capital city of the American state of Nevada.", "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.", "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district.", "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."]
* })

@@ -108,3 +114,3 @@ */

* await cohere.classify({
* inputs: ["Confirm your email address", "hey i need u to send some $", "inputs"],
* inputs: ["Confirm your email address", "hey i need u to send some $"],
* examples: [{

@@ -140,3 +146,3 @@ * text: "Dermatologists don't like her!",

* label: "Not spam"
* }, {}],
* }],
* preset: "my-preset-a58sbd"

@@ -147,3 +153,7 @@ * })

/**
* This endpoint generates a summary in English for a given text.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates a summary in English for a given text.
* @throws {@link Cohere.TooManyRequestsError}

@@ -176,3 +186,3 @@ *

* await cohere.detokenize({
* tokens: [10104, 12221, 1315, 34, 1420, 69, 1]
* tokens: [10104, 12221, 1315, 34, 1420, 69]
* })

@@ -189,3 +199,5 @@ */

get models(): Models;
protected _finetuning: Finetuning | undefined;
get finetuning(): Finetuning;
protected _getAuthorizationHeader(): Promise<string>;
}

@@ -52,2 +52,3 @@ "use strict";

const Client_4 = require("./api/resources/models/client/Client");
const Client_5 = require("./api/resources/finetuning/client/Client");
class CohereClient {

@@ -74,3 +75,3 @@ constructor(_options = {}) {

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -141,5 +142,2 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* message: "How can I help you today?"
* }, {
* role: Cohere.ChatMessageRole.Chatbot,
* message: "message"
* }],

@@ -163,3 +161,3 @@ * promptTruncation: Cohere.ChatRequestPromptTruncation.Off,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -209,3 +207,7 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
*/

@@ -225,3 +227,3 @@ generateStream(request, requestOptions) {

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -282,3 +284,7 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
* @throws {@link Cohere.BadRequestError}

@@ -308,3 +314,3 @@ * @throws {@link Cohere.TooManyRequestsError}

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -380,3 +386,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -437,3 +443,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* query: "What is the capital of the United States?",
* documents: []
* documents: ["Carson City is the capital city of the American state of Nevada.", "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.", "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district.", "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."]
* })

@@ -454,3 +460,3 @@ */

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -508,3 +514,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* await cohere.classify({
* inputs: ["Confirm your email address", "hey i need u to send some $", "inputs"],
* inputs: ["Confirm your email address", "hey i need u to send some $"],
* examples: [{

@@ -540,3 +546,3 @@ * text: "Dermatologists don't like her!",

* label: "Not spam"
* }, {}],
* }],
* preset: "my-preset-a58sbd"

@@ -558,3 +564,3 @@ * })

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -608,3 +614,7 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

/**
* This endpoint generates a summary in English for a given text.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates a summary in English for a given text.
* @throws {@link Cohere.TooManyRequestsError}

@@ -630,3 +640,3 @@ *

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -700,3 +710,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -755,3 +765,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* await cohere.detokenize({
* tokens: [10104, 12221, 1315, 34, 1420, 69, 1]
* tokens: [10104, 12221, 1315, 34, 1420, 69]
* })

@@ -772,3 +782,3 @@ */

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -833,2 +843,6 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

}
get finetuning() {
var _a;
return ((_a = this._finetuning) !== null && _a !== void 0 ? _a : (this._finetuning = new Client_5.Finetuning(this._options)));
}
_getAuthorizationHeader() {

@@ -835,0 +849,0 @@ var _a;
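The new `finetuning` getter above lazily constructs the sub-client on first access and caches it, the same pattern the client uses for its other resources. A minimal sketch with placeholder classes (not the SDK's actual types):

```typescript
// Placeholder sub-client standing in for Client_5.Finetuning.
class SubClient {
  constructor(public readonly options: object) {}
}

// Lazy-getter pattern: construct once on first access, then reuse.
class ClientSketch {
  private _finetuning: SubClient | undefined;
  constructor(private readonly options: object = {}) {}

  get finetuning(): SubClient {
    return (this._finetuning ??= new SubClient(this.options));
  }
}

const client = new ClientSketch();
```

Repeated accesses return the same cached instance, so the sub-client is only paid for when used.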

@@ -16,5 +16,2 @@ /**

* message: "How can I help you today?"
* }, {
* role: Cohere.ChatMessageRole.Chatbot,
* message: "message"
* }],

@@ -39,9 +36,15 @@ * promptTruncation: Cohere.ChatRequestPromptTruncation.Off,

/**
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style.
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the `SYSTEM` role.
*
* The `SYSTEM` role is also used for the contents of the optional `chat_history=` parameter. When used with the `chat_history=` parameter it adds content throughout a conversation. Conversely, when used with the `preamble=` parameter it adds content at the start of the conversation only.
*
*/
preamble?: string;
/**
* A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's `message`.
* A list of previous messages between the user and the model, giving the model conversational context for responding to the user's `message`.
*
* Each item represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
*
* The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
*
*/

@@ -130,3 +133,10 @@ chatHistory?: Cohere.ChatMessage[];

p?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**
* A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
*
*/
stopSequences?: string[];
/**
* Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

@@ -150,4 +160,3 @@ *

*
* When `tools` is passed, The `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made
* the `tool_calls` array will be empty.
* When `tools` is passed (without `tool_results`), the `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.
*

@@ -157,5 +166,6 @@ */

/**
* A list of results from invoking tools. Results are used to generate text and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
*
* **Note**: `outputs` must be a list of objects. If your tool returns a single object (eg `{"status": 200}`), make sure to wrap it in a list.
* ```

@@ -165,6 +175,6 @@ * tool_results = [

* "call": {
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* },

@@ -178,2 +188,3 @@ * "outputs": [{

* ```
* **Note**: Chat calls with `tool_results` should not be included in the Chat history to avoid duplication of the message text.
*

@@ -180,0 +191,0 @@ */

@@ -19,9 +19,15 @@ /**

/**
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style.
* When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the `SYSTEM` role.
*
* The `SYSTEM` role is also used for the contents of the optional `chat_history=` parameter. When used with the `chat_history=` parameter it adds content throughout a conversation. Conversely, when used with the `preamble=` parameter it adds content at the start of the conversation only.
*
*/
preamble?: string;
/**
* A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's `message`.
* A list of previous messages between the user and the model, giving the model conversational context for responding to the user's `message`.
*
* Each item represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
*
* The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
*
*/
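
For illustration, a request combining `preamble` with a `chat_history` of prior turns might look like the following (a hypothetical sketch of the documented request shape using plain objects, not code from the package itself):

```typescript
// Hypothetical sketch of a Chat request body matching the documented shape:
// `preamble` supplies the SYSTEM-role content at the start of the conversation,
// while `chatHistory` carries only prior USER/CHATBOT turns.
interface HistoryMessage {
  role: "CHATBOT" | "SYSTEM" | "USER";
  message: string;
}

interface ChatRequestSketch {
  message: string;
  preamble?: string;
  chatHistory?: HistoryMessage[];
}

const chatRequest: ChatRequestSketch = {
  message: "What was my last question?",
  // Preferred over injecting a SYSTEM message into chatHistory:
  preamble: "You are a concise assistant.",
  chatHistory: [
    { role: "USER", message: "Who formulated the law of gravity?" },
    { role: "CHATBOT", message: "Isaac Newton is credited with formulating it." },
  ],
};

// Per the docs above, SYSTEM content belongs in `preamble`, not `chatHistory`.
const hasSystemInHistory =
  chatRequest.chatHistory?.some((m) => m.role === "SYSTEM") ?? false;
console.log(hasSystemInHistory); // false
```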

@@ -110,3 +116,10 @@ chatHistory?: Cohere.ChatMessage[];

p?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**
* A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point, not including the stop sequence.
*
*/
stopSequences?: string[];
/**
* Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

@@ -130,4 +143,3 @@ *

*
* When `tools` is passed, The `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made
* the `tool_calls` array will be empty.
* When `tools` is passed (without `tool_results`), the `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.
*

@@ -137,5 +149,6 @@ */

/**
* A list of results from invoking tools. Results are used to generate text and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
* Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
*
* **Note**: `outputs` must be a list of objects. If your tool returns a single object (e.g. `{"status": 200}`), make sure to wrap it in a list.
* ```

@@ -145,6 +158,6 @@ * tool_results = [

* "call": {
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* "name": <tool name>,
* "parameters": {
* <param name>: <param value>
* }
* },

@@ -158,2 +171,3 @@ * "outputs": [{

* ```
* **Note**: Chat calls with `tool_results` should not be included in the Chat history to avoid duplication of the message text.
*

@@ -160,0 +174,0 @@ */
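
Concretely, a `tool_results` entry with its `call` and list-valued `outputs` can be sketched as follows (shape per the docstring above; the tool name and values are hypothetical):

```typescript
// Hypothetical tool result matching the documented structure: each entry
// records how the tool was invoked plus a LIST of output objects.
interface ToolResultSketch {
  call: { name: string; parameters: Record<string, unknown> };
  outputs: Array<Record<string, unknown>>;
}

// A tool that returns a single object...
const singleOutput = { status: 200 };

const toolResults: ToolResultSketch[] = [
  {
    call: { name: "sales_database", parameters: { day: "2023-09-29" } },
    // ...must still have that object wrapped in a list, per the note above.
    outputs: [singleOutput],
  },
];

console.log(Array.isArray(toolResults[0].outputs)); // true
```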

@@ -8,3 +8,3 @@ /**

* {
* inputs: ["Confirm your email address", "hey i need u to send some $", "inputs"],
* inputs: ["Confirm your email address", "hey i need u to send some $"],
* examples: [{

@@ -40,3 +40,3 @@ * text: "Dermatologists don't like her!",

* label: "Not spam"
* }, {}],
* }],
* preset: "my-preset-a58sbd"

@@ -43,0 +43,0 @@ * }

@@ -7,3 +7,3 @@ /**

* {
* tokens: [10104, 12221, 1315, 34, 1420, 69, 1]
* tokens: [10104, 12221, 1315, 34, 1420, 69]
* }

@@ -10,0 +10,0 @@ */

@@ -53,2 +53,4 @@ /**

temperature?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**

@@ -55,0 +57,0 @@ * Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).

@@ -45,2 +45,4 @@ /**

temperature?: number;
/** If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. */
seed?: number;
/**

@@ -47,0 +49,0 @@ * Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).

@@ -10,3 +10,3 @@ /**

* query: "What is the capital of the United States?",
* documents: []
* documents: ["Carson City is the capital city of the American state of Nevada.", "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.", "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district.", "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."]
* }

@@ -13,0 +13,0 @@ */

export * from "./BadRequestError";
export * from "./UnauthorizedError";
export * from "./ForbiddenError";

@@ -6,1 +7,2 @@ export * from "./NotFoundError";

export * from "./InternalServerError";
export * from "./ServiceUnavailableError";

@@ -18,2 +18,3 @@ "use strict";

__exportStar(require("./BadRequestError"), exports);
__exportStar(require("./UnauthorizedError"), exports);
__exportStar(require("./ForbiddenError"), exports);

@@ -23,1 +24,2 @@ __exportStar(require("./NotFoundError"), exports);

__exportStar(require("./InternalServerError"), exports);
__exportStar(require("./ServiceUnavailableError"), exports);

@@ -82,3 +82,3 @@ "use strict";

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -157,3 +157,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -231,3 +231,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -305,3 +305,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -381,3 +381,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -462,3 +462,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -465,0 +465,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -75,6 +75,6 @@ "use strict";

if (limit != null) {
_queryParams["limit"] = limit;
_queryParams["limit"] = limit.toString();
}
if (offset != null) {
_queryParams["offset"] = offset;
_queryParams["offset"] = offset.toString();
}
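
The change above can be sketched generically: numeric pagination fields are coerced to strings before being placed in the query map (a minimal standalone sketch of the pattern, not the SDK's internal fetcher):

```typescript
// Minimal sketch of the 7.9.0 fix: numeric `limit`/`offset` request fields
// are converted to strings before being sent as URL query parameters.
function buildQueryParams(limit?: number, offset?: number): Record<string, string> {
  const params: Record<string, string> = {};
  if (limit != null) {
    params["limit"] = limit.toString();
  }
  if (offset != null) {
    params["offset"] = offset.toString();
  }
  return params;
}

// URLSearchParams accepts a string-valued record directly.
const qs = new URLSearchParams(buildQueryParams(10, 20)).toString();
console.log(qs); // "limit=10&offset=20"
```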

@@ -91,3 +91,3 @@ const _response = yield core.fetcher({

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -189,3 +189,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -255,3 +255,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -319,3 +319,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -383,3 +383,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -386,0 +386,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -24,7 +24,7 @@ /**

*/
limit?: string;
limit?: number;
/**
* optional offset to start of results
*/
offset?: string;
offset?: number;
}

@@ -74,3 +74,3 @@ "use strict";

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -148,3 +148,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -220,3 +220,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -293,3 +293,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -296,0 +296,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -5,2 +5,3 @@ export * as embedJobs from "./embedJobs";

export * from "./datasets/types";
export * as finetuning from "./finetuning";
export * as connectors from "./connectors";

@@ -12,1 +13,2 @@ export * as models from "./models";

export * from "./models/client/requests";
export * from "./finetuning/client/requests";

@@ -29,3 +29,3 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.models = exports.connectors = exports.datasets = exports.embedJobs = void 0;
exports.models = exports.connectors = exports.finetuning = exports.datasets = exports.embedJobs = void 0;
exports.embedJobs = __importStar(require("./embedJobs"));

@@ -35,2 +35,3 @@ __exportStar(require("./embedJobs/types"), exports);

__exportStar(require("./datasets/types"), exports);
exports.finetuning = __importStar(require("./finetuning"));
exports.connectors = __importStar(require("./connectors"));

@@ -42,1 +43,2 @@ exports.models = __importStar(require("./models"));

__exportStar(require("./models/client/requests"), exports);
__exportStar(require("./finetuning/client/requests"), exports);

@@ -83,3 +83,3 @@ "use strict";

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -86,0 +86,0 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

@@ -6,13 +6,11 @@ /**

/**
* A single message in a chat history. Contains the role of the sender, the text contents of the message.
* Represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
*
* The `chat_history` parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, use the `preamble` parameter.
*/
export interface ChatMessage {
/** One of CHATBOT|USER to identify who the message is coming from. */
/** One of `CHATBOT`, `SYSTEM`, or `USER` to identify who the message is coming from. */
role: Cohere.ChatMessageRole;
/** Contents of the chat message. */
message: string;
/** Unique identifier for the generated reply. Useful for submitting feedback. */
generationId?: string;
/** Unique identifier for the response. */
responseId?: string;
}

@@ -5,8 +5,9 @@ /**

/**
* One of CHATBOT|USER to identify who the message is coming from.
* One of `CHATBOT`, `SYSTEM`, or `USER` to identify who the message is coming from.
*/
export declare type ChatMessageRole = "CHATBOT" | "USER";
export declare type ChatMessageRole = "CHATBOT" | "SYSTEM" | "USER";
export declare const ChatMessageRole: {
readonly Chatbot: "CHATBOT";
readonly System: "SYSTEM";
readonly User: "USER";
};
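
The widened enum can be exercised like so (a small sketch mirroring the declared constant rather than importing it from the package):

```typescript
// Sketch mirroring the 7.9.0 ChatMessageRole constant, which now includes SYSTEM.
const ChatMessageRole = {
  Chatbot: "CHATBOT",
  System: "SYSTEM",
  User: "USER",
} as const;

// Value/type declaration merging gives a union of the string literals.
type ChatMessageRole = (typeof ChatMessageRole)[keyof typeof ChatMessageRole];

const roles: ChatMessageRole[] = [ChatMessageRole.System, ChatMessageRole.User];
console.log(roles.join(",")); // "SYSTEM,USER"
```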

@@ -9,3 +9,4 @@ "use strict";

Chatbot: "CHATBOT",
System: "SYSTEM",
User: "USER",
};

@@ -34,3 +34,2 @@ export * from "./ChatStreamRequestPromptTruncation";

export * from "./DetokenizeResponse";
export * from "./ToolCall";
export * from "./ChatMessageRole";

@@ -42,2 +41,3 @@ export * from "./ChatMessage";

export * from "./Tool";
export * from "./ToolCall";
export * from "./ChatCitation";

@@ -44,0 +44,0 @@ export * from "./ChatSearchQuery";

@@ -50,3 +50,2 @@ "use strict";

__exportStar(require("./DetokenizeResponse"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatMessageRole"), exports);

@@ -58,2 +57,3 @@ __exportStar(require("./ChatMessage"), exports);

__exportStar(require("./Tool"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatCitation"), exports);

@@ -60,0 +60,0 @@ __exportStar(require("./ChatSearchQuery"), exports);

@@ -12,3 +12,2 @@ /**

parameters: Record<string, unknown>;
generationId: string;
}

@@ -11,2 +11,3 @@ /**

import { Models } from "./api/resources/models/client/Client";
import { Finetuning } from "./api/resources/finetuning/client/Client";
export declare namespace CohereClient {

@@ -46,5 +47,2 @@ interface Options {

* message: "How can I help you today?"
* }, {
* role: Cohere.ChatMessageRole.Chatbot,
* message: "message"
* }],

@@ -57,7 +55,15 @@ * promptTruncation: Cohere.ChatRequestPromptTruncation.Off,

/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
*/
generateStream(request: Cohere.GenerateStreamRequest, requestOptions?: CohereClient.RequestOptions): Promise<core.Stream<Cohere.GenerateStreamedResponse>>;
/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
* @throws {@link Cohere.BadRequestError}

@@ -94,3 +100,3 @@ * @throws {@link Cohere.TooManyRequestsError}

* query: "What is the capital of the United States?",
* documents: []
* documents: ["Carson City is the capital city of the American state of Nevada.", "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.", "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district.", "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."]
* })

@@ -108,3 +114,3 @@ */

* await cohere.classify({
* inputs: ["Confirm your email address", "hey i need u to send some $", "inputs"],
* inputs: ["Confirm your email address", "hey i need u to send some $"],
* examples: [{

@@ -140,3 +146,3 @@ * text: "Dermatologists don't like her!",

* label: "Not spam"
* }, {}],
* }],
* preset: "my-preset-a58sbd"

@@ -147,3 +153,7 @@ * })

/**
* This endpoint generates a summary in English for a given text.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates a summary in English for a given text.
* @throws {@link Cohere.TooManyRequestsError}

@@ -176,3 +186,3 @@ *

* await cohere.detokenize({
* tokens: [10104, 12221, 1315, 34, 1420, 69, 1]
* tokens: [10104, 12221, 1315, 34, 1420, 69]
* })

@@ -189,3 +199,5 @@ */

get models(): Models;
protected _finetuning: Finetuning | undefined;
get finetuning(): Finetuning;
protected _getAuthorizationHeader(): Promise<string>;
}

@@ -52,2 +52,3 @@ "use strict";

const Client_4 = require("./api/resources/models/client/Client");
const Client_5 = require("./api/resources/finetuning/client/Client");
class CohereClient {

@@ -74,3 +75,3 @@ constructor(_options = {}) {

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -141,5 +142,2 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* message: "How can I help you today?"
* }, {
* role: Cohere.ChatMessageRole.Chatbot,
* message: "message"
* }],

@@ -163,3 +161,3 @@ * promptTruncation: Cohere.ChatRequestPromptTruncation.Off,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -209,3 +207,7 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
*/

@@ -225,3 +227,3 @@ generateStream(request, requestOptions) {

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -282,3 +284,7 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

/**
* This endpoint generates realistic text conditioned on a given input.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates realistic text conditioned on a given input.
* @throws {@link Cohere.BadRequestError}

@@ -308,3 +314,3 @@ * @throws {@link Cohere.TooManyRequestsError}

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -380,3 +386,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -437,3 +443,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* query: "What is the capital of the United States?",
* documents: []
* documents: ["Carson City is the capital city of the American state of Nevada.", "The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean. Its capital is Saipan.", "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district.", "Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."]
* })

@@ -454,3 +460,3 @@ */

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -508,3 +514,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* await cohere.classify({
* inputs: ["Confirm your email address", "hey i need u to send some $", "inputs"],
* inputs: ["Confirm your email address", "hey i need u to send some $"],
* examples: [{

@@ -540,3 +546,3 @@ * text: "Dermatologists don't like her!",

* label: "Not spam"
* }, {}],
* }],
* preset: "my-preset-a58sbd"

@@ -558,3 +564,3 @@ * })

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -608,3 +614,7 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

/**
* This endpoint generates a summary in English for a given text.
* > 🚧 Warning
* >
* > This API is marked as "Legacy" and is no longer maintained. Follow the [migration guide](/docs/migrating-from-cogenerate-to-cochat) to start using the Chat API.
*
* Generates a summary in English for a given text.
* @throws {@link Cohere.TooManyRequestsError}

@@ -630,3 +640,3 @@ *

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -700,3 +710,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -755,3 +765,3 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

* await cohere.detokenize({
* tokens: [10104, 12221, 1315, 34, 1420, 69, 1]
* tokens: [10104, 12221, 1315, 34, 1420, 69]
* })

@@ -772,3 +782,3 @@ */

"X-Fern-SDK-Name": "cohere-ai",
"X-Fern-SDK-Version": "7.8.0",
"X-Fern-SDK-Version": "7.9.0",
"X-Fern-Runtime": core.RUNTIME.type,

@@ -833,2 +843,6 @@ "X-Fern-Runtime-Version": core.RUNTIME.version,

}
get finetuning() {
var _a;
return ((_a = this._finetuning) !== null && _a !== void 0 ? _a : (this._finetuning = new Client_5.Finetuning(this._options)));
}
_getAuthorizationHeader() {

@@ -835,0 +849,0 @@ var _a;

@@ -23,2 +23,4 @@ /**

p?: number | null;
seed?: number | null;
stop_sequences?: string[] | null;
frequency_penalty?: number | null;

@@ -25,0 +27,0 @@ presence_penalty?: number | null;

@@ -60,2 +60,4 @@ "use strict";

p: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
stopSequences: core.serialization.property("stop_sequences", core.serialization.list(core.serialization.string()).optional()),
frequencyPenalty: core.serialization.property("frequency_penalty", core.serialization.number().optional()),

@@ -62,0 +64,0 @@ presencePenalty: core.serialization.property("presence_penalty", core.serialization.number().optional()),

@@ -23,2 +23,4 @@ /**

p?: number | null;
seed?: number | null;
stop_sequences?: string[] | null;
frequency_penalty?: number | null;

@@ -25,0 +27,0 @@ presence_penalty?: number | null;

@@ -60,2 +60,4 @@ "use strict";

p: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
stopSequences: core.serialization.property("stop_sequences", core.serialization.list(core.serialization.string()).optional()),
frequencyPenalty: core.serialization.property("frequency_penalty", core.serialization.number().optional()),

@@ -62,0 +64,0 @@ presencePenalty: core.serialization.property("presence_penalty", core.serialization.number().optional()),

@@ -16,2 +16,3 @@ /**

temperature?: number | null;
seed?: number | null;
preset?: string | null;

@@ -18,0 +19,0 @@ end_sequences?: string[] | null;

@@ -47,2 +47,3 @@ "use strict";

temperature: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
preset: core.serialization.string().optional(),

@@ -49,0 +50,0 @@ endSequences: core.serialization.property("end_sequences", core.serialization.list(core.serialization.string()).optional()),

@@ -16,2 +16,3 @@ /**

temperature?: number | null;
seed?: number | null;
preset?: string | null;

@@ -18,0 +19,0 @@ end_sequences?: string[] | null;

@@ -47,2 +47,3 @@ "use strict";

temperature: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
preset: core.serialization.string().optional(),

@@ -49,0 +50,0 @@ endSequences: core.serialization.property("end_sequences", core.serialization.list(core.serialization.string()).optional()),

@@ -5,4 +5,6 @@ export * as embedJobs from "./embedJobs";

export * from "./datasets/types";
export * as finetuning from "./finetuning";
export * from "./embedJobs/client/requests";
export * as connectors from "./connectors";
export * from "./connectors/client/requests";
export * from "./finetuning/client/requests";

@@ -29,3 +29,3 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.connectors = exports.datasets = exports.embedJobs = void 0;
exports.connectors = exports.finetuning = exports.datasets = exports.embedJobs = void 0;
exports.embedJobs = __importStar(require("./embedJobs"));

@@ -35,4 +35,6 @@ __exportStar(require("./embedJobs/types"), exports);

__exportStar(require("./datasets/types"), exports);
exports.finetuning = __importStar(require("./finetuning"));
__exportStar(require("./embedJobs/client/requests"), exports);
exports.connectors = __importStar(require("./connectors"));
__exportStar(require("./connectors/client/requests"), exports);
__exportStar(require("./finetuning/client/requests"), exports);

@@ -12,5 +12,3 @@ /**

message: string;
generation_id?: string | null;
response_id?: string | null;
}
}

@@ -43,4 +43,2 @@ "use strict";

message: core.serialization.string(),
generationId: core.serialization.property("generation_id", core.serialization.string().optional()),
responseId: core.serialization.property("response_id", core.serialization.string().optional()),
});

@@ -9,3 +9,3 @@ /**

export declare namespace ChatMessageRole {
type Raw = "CHATBOT" | "USER";
type Raw = "CHATBOT" | "SYSTEM" | "USER";
}

@@ -31,2 +31,2 @@ "use strict";

const core = __importStar(require("../../core"));
exports.ChatMessageRole = core.serialization.enum_(["CHATBOT", "USER"]);
exports.ChatMessageRole = core.serialization.enum_(["CHATBOT", "SYSTEM", "USER"]);

@@ -34,3 +34,2 @@ export * from "./ChatStreamRequestPromptTruncation";

export * from "./DetokenizeResponse";
export * from "./ToolCall";
export * from "./ChatMessageRole";

@@ -42,2 +41,3 @@ export * from "./ChatMessage";

export * from "./Tool";
export * from "./ToolCall";
export * from "./ChatCitation";

@@ -44,0 +44,0 @@ export * from "./ChatSearchQuery";

@@ -50,3 +50,2 @@ "use strict";

__exportStar(require("./DetokenizeResponse"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatMessageRole"), exports);

@@ -58,2 +57,3 @@ __exportStar(require("./ChatMessage"), exports);

__exportStar(require("./Tool"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatCitation"), exports);

@@ -60,0 +60,0 @@ __exportStar(require("./ChatSearchQuery"), exports);

@@ -12,4 +12,3 @@ /**

parameters: Record<string, unknown>;
generation_id: string;
}
}

@@ -34,3 +34,2 @@ "use strict";

parameters: core.serialization.record(core.serialization.string(), core.serialization.unknown()),
generationId: core.serialization.property("generation_id", core.serialization.string()),
});

@@ -238,2 +238,3 @@ "use strict";

"numberOfSales": "120",
"tool_name": "sales_database",
"totalRevenue": "48500",

@@ -240,0 +241,0 @@ },

{
"name": "cohere-ai",
"version": "7.8.0",
"version": "7.9.0",
"private": false,

@@ -5,0 +5,0 @@ "repository": "https://github.com/cohere-ai/cohere-typescript",

@@ -23,2 +23,4 @@ /**

p?: number | null;
seed?: number | null;
stop_sequences?: string[] | null;
frequency_penalty?: number | null;

@@ -25,0 +27,0 @@ presence_penalty?: number | null;

@@ -60,2 +60,4 @@ "use strict";

p: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
stopSequences: core.serialization.property("stop_sequences", core.serialization.list(core.serialization.string()).optional()),
frequencyPenalty: core.serialization.property("frequency_penalty", core.serialization.number().optional()),

@@ -62,0 +64,0 @@ presencePenalty: core.serialization.property("presence_penalty", core.serialization.number().optional()),

@@ -23,2 +23,4 @@ /**

p?: number | null;
seed?: number | null;
stop_sequences?: string[] | null;
frequency_penalty?: number | null;

@@ -25,0 +27,0 @@ presence_penalty?: number | null;

@@ -60,2 +60,4 @@ "use strict";

p: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
stopSequences: core.serialization.property("stop_sequences", core.serialization.list(core.serialization.string()).optional()),
frequencyPenalty: core.serialization.property("frequency_penalty", core.serialization.number().optional()),

@@ -62,0 +64,0 @@ presencePenalty: core.serialization.property("presence_penalty", core.serialization.number().optional()),

@@ -16,2 +16,3 @@ /**

temperature?: number | null;
seed?: number | null;
preset?: string | null;

@@ -18,0 +19,0 @@ end_sequences?: string[] | null;

@@ -47,2 +47,3 @@ "use strict";

temperature: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
preset: core.serialization.string().optional(),

@@ -49,0 +50,0 @@ endSequences: core.serialization.property("end_sequences", core.serialization.list(core.serialization.string()).optional()),

@@ -16,2 +16,3 @@ /**

temperature?: number | null;
seed?: number | null;
preset?: string | null;

@@ -18,0 +19,0 @@ end_sequences?: string[] | null;

@@ -47,2 +47,3 @@ "use strict";

temperature: core.serialization.number().optional(),
seed: core.serialization.number().optional(),
preset: core.serialization.string().optional(),

@@ -49,0 +50,0 @@ endSequences: core.serialization.property("end_sequences", core.serialization.list(core.serialization.string()).optional()),

@@ -5,4 +5,6 @@ export * as embedJobs from "./embedJobs";

export * from "./datasets/types";
export * as finetuning from "./finetuning";
export * from "./embedJobs/client/requests";
export * as connectors from "./connectors";
export * from "./connectors/client/requests";
export * from "./finetuning/client/requests";

@@ -29,3 +29,3 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.connectors = exports.datasets = exports.embedJobs = void 0;
exports.connectors = exports.finetuning = exports.datasets = exports.embedJobs = void 0;
exports.embedJobs = __importStar(require("./embedJobs"));

@@ -35,4 +35,6 @@ __exportStar(require("./embedJobs/types"), exports);

__exportStar(require("./datasets/types"), exports);
exports.finetuning = __importStar(require("./finetuning"));
__exportStar(require("./embedJobs/client/requests"), exports);
exports.connectors = __importStar(require("./connectors"));
__exportStar(require("./connectors/client/requests"), exports);
__exportStar(require("./finetuning/client/requests"), exports);

@@ -12,5 +12,3 @@ /**

message: string;
generation_id?: string | null;
response_id?: string | null;
}
}

@@ -43,4 +43,2 @@ "use strict";

message: core.serialization.string(),
generationId: core.serialization.property("generation_id", core.serialization.string().optional()),
responseId: core.serialization.property("response_id", core.serialization.string().optional()),
});

@@ -9,3 +9,3 @@ /**

export declare namespace ChatMessageRole {
type Raw = "CHATBOT" | "USER";
type Raw = "CHATBOT" | "SYSTEM" | "USER";
}

@@ -31,2 +31,2 @@ "use strict";

const core = __importStar(require("../../core"));
exports.ChatMessageRole = core.serialization.enum_(["CHATBOT", "USER"]);
exports.ChatMessageRole = core.serialization.enum_(["CHATBOT", "SYSTEM", "USER"]);

@@ -34,3 +34,2 @@ export * from "./ChatStreamRequestPromptTruncation";

export * from "./DetokenizeResponse";
export * from "./ToolCall";
export * from "./ChatMessageRole";

@@ -42,2 +41,3 @@ export * from "./ChatMessage";

export * from "./Tool";
export * from "./ToolCall";
export * from "./ChatCitation";

@@ -44,0 +44,0 @@ export * from "./ChatSearchQuery";

@@ -50,3 +50,2 @@ "use strict";

__exportStar(require("./DetokenizeResponse"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatMessageRole"), exports);

@@ -58,2 +57,3 @@ __exportStar(require("./ChatMessage"), exports);

__exportStar(require("./Tool"), exports);
__exportStar(require("./ToolCall"), exports);
__exportStar(require("./ChatCitation"), exports);

@@ -60,0 +60,0 @@ __exportStar(require("./ChatSearchQuery"), exports);

@@ -12,4 +12,3 @@ /**

parameters: Record<string, unknown>;
generation_id: string;
}
}

@@ -34,3 +34,2 @@ "use strict";

parameters: core.serialization.record(core.serialization.string(), core.serialization.unknown()),
generationId: core.serialization.property("generation_id", core.serialization.string()),
});

@@ -238,2 +238,3 @@ "use strict";

"numberOfSales": "120",
"tool_name": "sales_database",
"totalRevenue": "48500",

@@ -240,0 +241,0 @@ },
