
vorion-sdk

Vorion SDK is a comprehensive TypeScript SDK for interacting with the VORION RAG & LLM APIs. It supports both REST and WebSocket interfaces and provides an extensible solution for server-side applications.
npm install vorion-sdk
import {
  VorionRAGSDK,
  VorionLLMSDK,
  VorionWebSocket,
  createVorionServer,
  VorionEvents,
  EmbedderOptions,
  VectorStoreOptions,
  SplitterTypeOptions,
  LLMOptions,
  LLMGroupNameOptions,
  LoadBalanceStrategyOptions,
  MemoryOptions,
  MemoryStrategyOptions,
} from "vorion-sdk";
const ragSdk = new VorionRAGSDK("https://your-rag-api-base-url.com");

// Example: Embedding documents
const embedResult = await ragSdk.embed({
  documents: ["Your document content here"],
  embedder_name: EmbedderOptions.Azure,
});

// Example: Ingesting documents
const ingestResult = await ragSdk.ingest({
  data_sources: [
    {
      type: "txt",
      target: "path/to/your/file.txt",
      metadata: { subject: "Example" },
      parameters: {},
    },
  ],
  embedder_name: EmbedderOptions.Azure,
  vectorstore_name: VectorStoreOptions.Elastic,
  collection_name: "your_collection_name",
  preferred_splitter_type: SplitterTypeOptions.Recursive,
});

// Example: Retrieving documents
const retrieveResult = await ragSdk.retrieve({
  embedder_name: EmbedderOptions.Azure,
  vectorstore_name: VectorStoreOptions.Elastic,
  collection_name: "your_collection_name",
  query: "Your search query",
  k: 5,
});
const llmSdk = new VorionLLMSDK("https://your-llm-api-base-url.com");

// Example: Making a prediction
const predictResult = await llmSdk.predict({
  conversation_state_key: "unique_conversation_id",
  prompt: {
    text: "Your prompt text here",
    sensitive_info: false,
  },
  llm_name: LLMOptions.OpenAI,
  llm_group_name: LLMGroupNameOptions["gpt-4"],
  memory_strategy_name: MemoryStrategyOptions.FullSummarize,
  memory_type: MemoryOptions.Redis,
  load_balancer_strategy_name:
    LoadBalanceStrategyOptions.DynamicWeightedRoundRobin,
});
// Example: Using Dapr for prediction
import { DaprClient } from "@dapr/dapr";

const daprClient = new DaprClient();

const predictResultWithDapr = await llmSdk.predict(
  {
    // ... prediction request parameters ...
  },
  true, // Set to true to use Dapr
  daprClient
);
// Example: Using a basic AI agent
const agentResult = await llmSdk.agentBasic({
  team_id: "your_team_id",
  assistant_name: "Assistant",
  assistant_sys_message: "You are a helpful assistant.",
  task: "Your task description",
  conversation_state_key: "unique_conversation_id",
  llm_name: LLMOptions.OpenAI,
  llm_group_name: LLMGroupNameOptions["gpt-4"],
  load_balancer_strategy_name: LoadBalanceStrategyOptions.RoundRobin,
});
const socket = VorionWebSocket(
  "wss://your-websocket-url.com",
  "unique-session-id"
);

socket.on(VorionEvents.PREDICTION_COMPLETE, (payload) => {
  console.log("Prediction complete:", payload);
});

socket.on(VorionEvents.INGEST_DOCUMENTS_SUCCEEDED, (payload) => {
  console.log("Document ingestion succeeded:", payload);
});
import { createVorionServer, DaprEvents } from "vorion-sdk";

const { start, app } = createVorionServer({
  port: 3000,
  wsServerResponses: {
    [DaprEvents.PREDICTION_COMPLETE]: async (data) => {
      // Handle prediction completion
      return {
        event: "custom-prediction-event",
        payload: { result: data.answer },
      };
    },
    // Other event handlers...
  },
  listenCallback: () => {
    console.log("Vorion Server started");
  },
});

// Add custom routes or logic
app.get("/custom-route", () => "Custom route response");

// Start the server
start();
The SDK provides methods corresponding to various API endpoints:
- embed: Embed documents or queries
- aembed: Asynchronous embedding
- ingest: Ingest documents into the vector store
- aingest: Asynchronous document ingestion
- ingestMultipart: Ingest documents using multipart form data
- aingestMultipart: Asynchronous multipart document ingestion
- queryIngestState: Query the state of an ingestion task
- rollbackIngest: Roll back a document ingestion
- load: Load documents
- aload: Asynchronous document loading
- retrieve: Retrieve relevant documents
- predict: Make a prediction using the LLM
- apredict: Asynchronous prediction
- comparePredict: Compare predictions from different models
- agentBasic: Use a basic AI agent
- agentAbasic: Asynchronous basic AI agent
- agentTeam: Use a team of AI agents
- getHistory: Retrieve conversation history
- getLlmConfig: Get LLM configuration
- getFile: Retrieve a specific file
- getAllFiles: Retrieve all files for a conversation

The SDK includes a rich set of types and enums for better type safety and autocompletion:
- EmbedRequest, EmbedResponse: Types for embedding requests and responses
- IngestRequest, IngestResponse: Types for document ingestion
- RetrieveRequest, RetrieveResponse: Types for document retrieval
- LoadRequest, LoadResultModel: Types for document loading
- PredictRequest, PredictResponse: Types for LLM predictions
- AgentBasicRequest, AgentBasicResponse: Types for basic agent interactions
- GetHistoryRequest, GetHistoryResponse: Types for retrieving conversation history
- EmbedderOptions: Options for embedding models (e.g., Azure, OpenAI)
- VectorStoreOptions: Options for vector stores (e.g., Redis, Chroma)
- LLMOptions: Options for language models (e.g., Azure, OpenAI, Google)
- LLMGroupNameOptions: Options for LLM group names (e.g., gpt-4, claude-3-5-sonnet)
- LoadBalanceStrategyOptions: Options for load balancing strategies
- MemoryOptions: Options for memory types (e.g., InMemory, Redis)
- MemoryStrategyOptions: Options for memory strategies (e.g., FullSummarize, RemoveTop)
- SplitterTypeOptions: Options for text splitting methods
- VorionEvents: Enum of WebSocket events (e.g., PREDICTION_COMPLETE, INGEST_DOCUMENTS_SUCCEEDED)
- VorionServerParams: Type for server creation parameters
- wsServerResponses: Type for WebSocket server response handlers

All SDK methods return a promise that resolves to an object with the following structure:
{
  isSuccess: boolean;
  errors?: ApiError;
  response?: ResponseType;
  code: number | null;
  createdAt: Date;
}
Handle potential errors by checking the isSuccess flag and the errors object.
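A minimal sketch of that pattern (the `ApiError` shape and the `unwrap` helper below are illustrative assumptions, not part of the SDK):

```typescript
// Assumed minimal shape for ApiError; the real SDK type may carry more fields.
type ApiError = { message?: string };

// Result envelope mirroring the structure all SDK methods resolve to.
interface SdkResult<T> {
  isSuccess: boolean;
  errors?: ApiError;
  response?: T;
  code: number | null;
  createdAt: Date;
}

// Hypothetical helper: return the response on success, throw otherwise.
function unwrap<T>(result: SdkResult<T>): T {
  if (!result.isSuccess || result.response === undefined) {
    const reason = result.errors?.message ?? "unknown error";
    throw new Error(`SDK call failed (code ${result.code}): ${reason}`);
  }
  return result.response;
}

// Usage (hypothetical):
// const docs = unwrap(await ragSdk.retrieve({ /* ... */ }));
```

Centralizing the `isSuccess` check this way keeps call sites short and ensures failed calls never silently yield `undefined`.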
The createVorionServer function allows you to create an Elysia-based server that can be extended to handle WebSocket events and add custom logic:
const { start, app } = createVorionServer({
  port: 3000,
  wsServerResponses: {
    // Define event handlers here
  },
});

// Add custom routes or logic
app.get("/custom-route", () => "Custom route response");

// Start the server
start();
This project is licensed under the ISC License.