# VectorVault
VectorVault is a JavaScript library for interacting with the VectorVault Cloud API. It lets you manage items in your vault vector database, obtain RAG (retrieval-augmented generation) chat responses, stream responses, and more.
## Installation
Install VectorVault via npm:

```shell
npm install vectorvault --save
```
## Usage
To use VectorVault, import it and instantiate it with your user details and API keys:

```javascript
import VectorVault from 'vectorvault';

const user = 'your_email@example.com';
const vault = 'your_vault_name';
const api = 'your_vectorvault_api_key';
const openai_key = 'your_openai_api_key';

const vectorVault = new VectorVault(user, vault, api, openai_key);
```
## Basic Operations
Here are some of the basic operations you can perform:

```javascript
// Get a RAG chat response for a query
vectorVault.getChat({ text: 'Your query here' })
  .then(response => console.log(response))
  .catch(error => console.error(error));

// Retrieve items from the vault by ID
vectorVault.getItems([1])
  .then(items => console.log(items))
  .catch(error => console.error(error));

// Add text data to the vault
vectorVault.addCloud({ text: 'Your text data here' })
  .then(response => console.log(response))
  .catch(error => console.error(error));
```
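Since these methods return promises, they also work with `async`/`await`. A minimal sketch; the `askVault` helper is our own wrapper for illustration, not part of the library:

```javascript
// Hypothetical helper showing async/await usage of getChat.
// `askVault` is not part of the VectorVault API.
async function askVault(vectorVault, query) {
  try {
    // getChat resolves with the chat response for the given text
    const response = await vectorVault.getChat({ text: query });
    return response;
  } catch (error) {
    console.error('VectorVault request failed:', error);
    throw error;
  }
}
```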
## Streaming Chat Responses
The `getChatStream` function allows you to stream data from the VectorVault API. It takes two arguments: `params`, an object containing the parameters for your request, and `callback`, a function that is called with each piece of data received from the stream.

Here is how you can use `getChatStream`:
```javascript
function handleStreamedData(data) {
  console.log(data);
}

const streamParams = {
  text: "Your query here",
};

vectorVault.getChatStream(streamParams, handleStreamedData)
  .then(() => console.log("Streaming completed."))
  .catch(error => console.error("Streaming error:", error));
```
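Because the callback fires once per chunk, a common pattern is to accumulate the pieces into the full response string. A small sketch; the `makeAccumulator` helper is our own, not part of the library:

```javascript
// Collects streamed chunks into one string; call value() after the
// stream completes to read the full response. Illustration only.
function makeAccumulator() {
  let buffer = "";
  return {
    onData(chunk) { buffer += chunk; }, // pass this as the stream callback
    value() { return buffer; },
  };
}
```

For example, with a configured `vectorVault` instance you could pass `acc.onData` as the callback to `getChatStream` and read `acc.value()` once the returned promise resolves.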
The `params` object can include any of the following properties:
- `text`: The input text for the chat.
- `history`: The chat history, if applicable.
- `summary`: A boolean indicating whether the response should be a summary.
- `get_context`: A boolean indicating whether to retrieve context information.
- `n_context`: The number of context items to retrieve.
- `return_context`: A boolean to include the retrieved context in the response.
- `smart_history_search`: A boolean to enable smart history searching.
- `model`: The model to use, e.g. `"gpt-3.5-turbo"`.
- `include_context_meta`: A boolean to include metadata about the context.
- `metatag`, `metatag_prefixes`, `metatag_suffixes`: Arrays for advanced context tagging.
- `custom_prompt`: A custom prompt to use instead of the default.
- `temperature`: The sampling temperature.
- `timeout`: How long to wait for the model's response before timing out.
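For example, a more fully specified request might look like the following (the values are illustrative, and the unit of `timeout` is an assumption; adjust everything to your use case):

```javascript
// Illustrative params object combining several of the options above.
const streamParams = {
  text: "Summarize our refund policy",
  history: "User: Hi\nAI: Hello! How can I help?",
  get_context: true,          // retrieve vault context for the answer
  n_context: 4,               // how many context items to retrieve
  return_context: false,      // don't echo the context back
  smart_history_search: true,
  model: "gpt-3.5-turbo",
  temperature: 0.2,           // keep the answer focused
  timeout: 45,                // wait time for the model (unit assumed)
};
```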
Make sure to replace `"Your query here"` with the actual text you want to send to the API.

Please note that `getChatStream` is an asynchronous function and should be handled with `async`/`await` or `.then().catch()` for proper error handling.