# @singlestore/ai
A module that enhances the [`@singlestore/client`](https://github.com/singlestore-labs/singlestore/tree/main/packages/client) package with AI functionality, allowing you to integrate AI features like embeddings and chat completions.
```sh
npm install @singlestore/ai
```
The `AI` class can be initialized in various ways depending on your requirements. You can start with the default setup, extend it with custom managers for embeddings and chat completions, or add custom tools.

This is the simplest way to initialize the `AI` class, using an OpenAI API key.

```ts
import { AI } from "@singlestore/ai";

const ai = new AI({ openAIApiKey: "<OPENAI_API_KEY>" });
```
You can define a custom embeddings manager by extending the `EmbeddingsManager` class to control how embeddings are created and which models are available.

```ts
import { AI, type CreateEmbeddingsParams, type Embedding, EmbeddingsManager } from "@singlestore/ai";

class CustomEmbeddingsManager extends EmbeddingsManager {
  getModels(): string[] {
    return ["<MODEL_NAME>"];
  }

  async create(input: string | string[], params?: CreateEmbeddingsParams): Promise<Embedding[]> {
    const embeddings: Embedding[] = await customFnCall();
    return embeddings;
  }
}

const ai = new AI({
  openAIApiKey: "<OPENAI_API_KEY>",
  embeddingsManager: new CustomEmbeddingsManager(),
});
```
You can define a custom chat completions manager by extending the `ChatCompletionsManager` class. This lets you control how chat completions are handled, in both streaming and non-streaming modes.

```ts
import {
  AI,
  type AnyChatCompletionTool,
  ChatCompletionsManager,
  type CreateChatCompletionParams,
  type CreateChatCompletionResult,
  type MergeChatCompletionTools,
} from "@singlestore/ai";

type ChatCompletionTools = undefined; // If an array of custom tools is created, use `typeof tools`.

class CustomChatCompletionsManager extends ChatCompletionsManager<ChatCompletionTools> {
  getModels(): Promise<string[]> | string[] {
    return ["<MODEL_NAME>"];
  }

  async create<TStream extends boolean, TTools extends AnyChatCompletionTool[] | undefined>(
    params: CreateChatCompletionParams<TStream, MergeChatCompletionTools<ChatCompletionTools, TTools>>,
  ): Promise<CreateChatCompletionResult<TStream>> {
    if (params.stream) {
      const stream = customFnCall();
      return stream as CreateChatCompletionResult<TStream>;
    }

    const chatCompletion = await customFnCall();
    return chatCompletion as CreateChatCompletionResult<TStream>;
  }
}

const ai = new AI({
  openAIApiKey: "<OPENAI_API_KEY>",
  chatCompletionsManager: new CustomChatCompletionsManager(),
});
```
You can also create custom tools to extend the functionality of chat completions by defining them with the `ChatCompletionTool` class.

```ts
import { AI, ChatCompletionTool } from "@singlestore/ai";
import { z } from "zod";

const customTool = new ChatCompletionTool({
  name: "<TOOL_NAME>",
  description: "<TOOL_DESCRIPTION>",
  params: z.object({ paramName: z.string().describe("<PARAM_DESCRIPTION>") }),
  call: async (params) => {
    const value = await anyFnCall(params);
    return { name: "<TOOL_NAME>", params, value: JSON.stringify(value) };
  },
});

const ai = new AI({
  tools: [customTool],
  ...
});
```
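The tool contract is easiest to see in a concrete, standalone sketch of the general tool-call pattern. The `Tool` interface, `echoTool`, and `dispatch` below are illustrative assumptions, not part of the package's API:

```typescript
// A tool exposes a name, a description, and a call function; a chat runtime
// looks up the tool the model asked for and invokes it with the parsed params.
type ToolResult = { name: string; value: string };

interface Tool {
  name: string;
  description: string;
  call: (params: Record<string, unknown>) => Promise<ToolResult>;
}

// Hypothetical tool: echoes its input back, standing in for a real API call.
const echoTool: Tool = {
  name: "echo",
  description: "Returns its input unchanged.",
  call: async (params) => ({ name: "echo", value: JSON.stringify(params) }),
};

// Minimal dispatcher: find the requested tool and run it.
async function dispatch(
  tools: Tool[],
  name: string,
  params: Record<string, unknown>,
): Promise<ToolResult> {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.call(params);
}
```

In the real package, the `params` zod schema additionally validates and documents the arguments the model is expected to supply.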
You can define a custom text splitter by extending the `TextSplitter` class to control how text is split.

```ts
import { AI, TextSplitter, type TextSplitterSplitOptions } from "@singlestore/ai";

class CustomTextSplitter extends TextSplitter {
  split(text: string, options?: TextSplitterSplitOptions): string[] {
    // Custom splitting logic goes here.
    return [];
  }
}

const ai = new AI({ textSplitter: new CustomTextSplitter() });
```
If a custom embeddings manager handles the API calls, the `openAIApiKey` parameter is not required.

```ts
const models = ai.embeddings.getModels();
```

```ts
const embeddings = await ai.embeddings.create("<INPUT>", {
  model: "<MODEL_NAME>", // Optional
  dimensions: "<DIMENSION>", // Optional
});
```

To create embeddings for multiple inputs, pass an array of strings:

```ts
const embeddings = await ai.embeddings.create(["<INPUT>", "<INPUT_2>"], ...);
```
If a custom `EmbeddingsManager` is provided, all the parameters can still be passed to the `ai.embeddings.create` method, allowing for custom handling and logic while preserving the same interface.

```ts
const models = ai.chatCompletions.getModels();
```
The `create` method allows you to generate chat completions either as a complete string or as a stream, depending on the `stream` option.
Performs a chat completion and returns the result as a complete string.
```ts
const chatCompletion = await ai.chatCompletions.create({
  stream: false,
  prompt: "<PROMPT>",
  model: "<MODEL_NAME>", // Optional
  systemRole: "<SYSTEM_ROLE>", // Optional
  messages: [{ role: "user", content: "<CONTENT>" }], // Optional
});
```
Performs a chat completion and returns the result as a stream of data chunks.
```ts
const stream = await ai.chatCompletions.create({
  stream: true,
  prompt: "<PROMPT>",
  model: "<MODEL_NAME>", // Optional
  systemRole: "<SYSTEM_ROLE>", // Optional
  messages: [{ role: "user", content: "<CONTENT>" }], // Optional
  tools: [...], // Optional
});

const chatCompletion = await ai.chatCompletions.handleStream(stream, async (chunk) => {
  await customFnCall(chunk);
});
```
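Conceptually, handling a streamed completion means consuming an async sequence of chunks, invoking a callback on each one, and accumulating the final text. A standalone sketch, independent of the package's actual stream type (`handleChunks` and `fakeStream` are hypothetical names):

```typescript
// Consume an async iterable of text chunks, calling onChunk for each one,
// and return the accumulated completion once the stream is exhausted.
async function handleChunks(
  stream: AsyncIterable<string>,
  onChunk: (chunk: string) => Promise<void> | void,
): Promise<string> {
  let completion = "";
  for await (const chunk of stream) {
    await onChunk(chunk);
    completion += chunk;
  }
  return completion;
}

// A fake stream standing in for a streamed chat completion.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world!";
}
```

The per-chunk callback is where you would forward partial output to a UI while the full completion is still being produced.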
When `stream: true` is set, the `handleStream` function processes the stream and accepts a callback function as the second argument. The callback handles each new chunk of data as it arrives.

If a custom `ChatCompletionsManager` is provided, all the parameters can still be passed to the `ai.chatCompletions.create` method, allowing for custom handling and logic while preserving the same interface.

The text splitter breaks a given text into smaller chunks, making it easier to handle for tasks like generating embeddings. By default, it splits text by sentences, but you can customize it to use a different delimiter or set the maximum chunk size.
```ts
const chunks = ai.textSplitter.split(
  text,
  {
    chunkSize: 1024, // Optional; 1024 by default
    delimiter: " ", // Optional; sentence splitting by default
  }, // Optional
);
```
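To illustrate the kind of chunking a splitter performs, here is a standalone, delimiter-based sketch. It is not the package's implementation; `splitText` is a hypothetical helper that greedily packs delimiter-separated pieces into chunks no longer than `chunkSize` characters:

```typescript
// Split text on a delimiter, then greedily pack the pieces into chunks,
// starting a new chunk whenever adding a piece would exceed chunkSize.
function splitText(text: string, chunkSize = 1024, delimiter = ". "): string[] {
  const pieces = text.split(delimiter).filter((p) => p.length > 0);
  const chunks: string[] = [];
  let current = "";
  for (const piece of pieces) {
    const candidate = current ? current + delimiter + piece : piece;
    if (candidate.length > chunkSize && current) {
      chunks.push(current); // current chunk is full; start a new one
      current = piece;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Keeping chunks under a fixed size matters because embedding models have input-length limits, and smaller, coherent chunks generally retrieve better.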