llamaindex
[![NPM Version](https://img.shields.io/npm/v/llamaindex)](https://www.npmjs.com/package/llamaindex) [![NPM License](https://img.shields.io/npm/l/llamaindex)](https://www.npmjs.com/package/llamaindex) [![NPM Downloads](https://img.shields.io/npm/dm/llamaindex)](https://www.npmjs.com/package/llamaindex)
LlamaIndex is a data framework for your LLM application.
Use your own data with large language models (LLMs, such as OpenAI's ChatGPT and others) in TypeScript and JavaScript.
Documentation: https://ts.llamaindex.ai/
LlamaIndex.TS aims to be a lightweight, easy to use set of libraries to help you integrate large language models into your applications with your own data.
LlamaIndex.TS supports multiple JS environments, including Node.js, React Server Components, and edge runtimes such as Vercel Edge and Cloudflare Workers.
For now, browser support is limited due to the lack of support for AsyncLocalStorage-like APIs.
npm install llamaindex
pnpm install llamaindex
yarn add llamaindex
{
  "compilerOptions": {
    // ⬇️ add this line to your tsconfig.json
    "moduleResolution": "bundler", // or "node16"
  },
}
We use conditional exports to support all environments. This is a modern way of shipping packages, but it can cause the TypeScript type check to fail under legacy module resolution. Imagine you put the output file at /dist/openai.js but import llamaindex/openai in your code, with package.json set like this:
{
"exports": {
"./openai": "./dist/openai.js"
}
}
Under the legacy module resolution, TypeScript cannot find the module because the import specifier does not follow the file structure, even though node index.js runs successfully (on Node.js >= 16).
See the TypeScript documentation on moduleResolution or the TypeScript 5.0 release blog for more.
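For reference, a complete minimal tsconfig.json that resolves these conditional exports might look like the following sketch (the target and other options are assumptions to adjust for your project):

```jsonc
{
  "compilerOptions": {
    // "bundler" also works when a bundler (Vite, webpack, etc.) resolves imports
    "moduleResolution": "node16",
    "module": "Node16",
    "target": "ES2022",
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```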
import fs from "fs/promises";
import { Document, VectorStoreIndex } from "llamaindex";
async function main() {
// Load essay from abramov.txt in Node
const essay = await fs.readFile(
"node_modules/llamaindex/examples/abramov.txt",
"utf-8",
);
// Create Document object with essay
const document = new Document({ text: essay });
// Split text and create embeddings. Store them in a VectorStoreIndex
const index = await VectorStoreIndex.fromDocuments([document]);
// Query the index
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({
query: "What did the author do in college?",
});
// Output response
console.log(response.toString());
}
main();
# `pnpm install tsx` before running the script
node --import tsx ./main.ts
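The example above uses OpenAI models by default, which read the standard OPENAI_API_KEY environment variable. A small guard at the top of main.ts (illustrative only, not part of the library) makes a missing key fail fast:

```typescript
// Fail fast if no OpenAI key is configured. OPENAI_API_KEY is the standard
// variable read by the OpenAI client; the guard itself is just a sketch.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("Set OPENAI_API_KEY before running this script.");
}
```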
First, you will need to add the llamaindex plugin to your Next.js project:
// next.config.js
const withLlamaIndex = require("llamaindex/next");
module.exports = withLlamaIndex({
// your next.js config
});
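If your project uses an ESM config file instead, the equivalent setup would presumably look like this (a sketch assuming the plugin's default export behaves the same under ESM):

```js
// next.config.mjs — ESM variant of the CommonJS example above
import withLlamaIndex from "llamaindex/next";

export default withLlamaIndex({
  // your next.js config
});
```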
You can combine the ai package with llamaindex in Next.js with RSC (React Server Components).
// src/apps/page.tsx
"use client";
import { chatWithAgent } from "@/actions";
import type { JSX } from "react";
import { useFormState } from "react-dom";
// You can use the Edge runtime in Next.js by adding this line:
// export const runtime = "edge";
export default function Home() {
const [ui, action] = useFormState<JSX.Element | null>(async () => {
return chatWithAgent("hello!", []);
}, null);
return (
<main>
{ui}
<form action={action}>
<button>Chat</button>
</form>
</main>
);
}
// src/actions/index.ts
"use server";
import { createStreamableUI } from "ai/rsc";
import { OpenAIAgent } from "llamaindex";
import type { ChatMessage } from "llamaindex/llm/types";
export async function chatWithAgent(
question: string,
prevMessages: ChatMessage[] = [],
) {
const agent = new OpenAIAgent({
tools: [
// ... adding your tools here
],
});
const responseStream = await agent.chat({
stream: true,
message: question,
chatHistory: prevMessages,
});
const uiStream = createStreamableUI(<div>loading...</div>);
responseStream
.pipeTo(
new WritableStream({
start: () => {
uiStream.update("response:");
},
write: async (message) => {
uiStream.append(message.response.delta);
},
}),
)
.catch(console.error);
return uiStream.value;
}
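The tools array in the agent above is empty. As an illustration, a tool can be defined with FunctionTool; the sketch below assumes the FunctionTool.from helper exported from the top-level package, so verify the exact signature against your installed version:

```typescript
import { FunctionTool } from "llamaindex";

// A hypothetical "sum" tool the agent can call while answering.
const sumTool = FunctionTool.from(
  ({ a, b }: { a: number; b: number }) => `${a + b}`,
  {
    name: "sum",
    description: "Adds two numbers a and b and returns the result",
    parameters: {
      type: "object",
      properties: {
        a: { type: "number", description: "First number" },
        b: { type: "number", description: "Second number" },
      },
      required: ["a", "b"],
    },
  },
);

// Then pass it to the agent: new OpenAIAgent({ tools: [sumTool] })
```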
Check out our Next.js playground at https://llama-playground.vercel.app/. The source is available at https://github.com/run-llama/ts-playground.
Document: A document represents a text file, PDF file or other contiguous piece of data.
Node: The basic data building block. Most commonly, these are parts of the document split into manageable pieces that are small enough to be fed into an embedding model and LLM.
Embedding: Embeddings are sets of floating point numbers which represent the data in a Node. By comparing the similarity of embeddings, we can derive an understanding of the similarity of two pieces of data. One use case is to compare the embedding of a question with the embeddings of our Nodes to see which Nodes may contain the data needed to answer that question (see the similarity sketch after this list). Because the default service context is OpenAI, the default embedding is OpenAIEmbedding. If you are using different models, say through Ollama, use the matching Embedding class (see all available embeddings in the documentation).
Indices: Indices store the Nodes and the embeddings of those nodes. QueryEngines retrieve Nodes from these Indices using embedding similarity.
QueryEngine: Query engines process the query you put in and give you back the result. Query engines generally combine a pre-built prompt with selected Nodes from your Index to give the LLM the context it needs to answer your query. To build a query engine from your Index (recommended), use the asQueryEngine method on your Index. See all query engines in the documentation.
ChatEngine: A ChatEngine helps you build a chatbot that will interact with your Indices. See all chat engines in the documentation.
SimplePrompt: A simple standardized function call definition that takes in inputs and formats them in a template literal. SimplePrompts can be specialized using currying and combined using other SimplePrompt functions.
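To make the embedding-similarity idea concrete, here is a minimal sketch of cosine similarity, the usual way embedding vectors are compared (illustrative only; LlamaIndex computes similarity internally):

```typescript
// Cosine similarity between two embedding vectors: values near 1 mean the
// vectors point the same way (similar content). Illustrative helper only.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```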
When you are importing llamaindex in a non-Node.js environment (such as React Server Components, Cloudflare Workers, etc.), some classes are not exported from the top-level entry file. The reason is that some classes are only compatible with the Node.js runtime (e.g. PDFReader) because they use Node.js-specific APIs (like fs, child_process, crypto). If you need any of those classes, you have to import them directly through their file path in the package. Here's an example for importing the PineconeVectorStore class:
import { PineconeVectorStore } from "llamaindex/storage/vectorStore/PineconeVectorStore";
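A minimal usage sketch follows; it assumes Pinecone credentials are provided via environment variables and uses the storageContextFromDefaults helper to wire the vector store into an index (verify both against your installed version):

```typescript
import {
  Document,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";
import { PineconeVectorStore } from "llamaindex/storage/vectorStore/PineconeVectorStore";

// Assumes Pinecone settings (e.g. PINECONE_API_KEY) are set in the environment.
const vectorStore = new PineconeVectorStore();
const storageContext = await storageContextFromDefaults({ vectorStore });
const index = await VectorStoreIndex.fromDocuments(
  [new Document({ text: "hello, pinecone" })],
  { storageContext },
);
```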
As the PDFReader does not work with the Edge runtime, here's how to use the SimpleDirectoryReader with the LlamaParseReader to load PDFs:
import { SimpleDirectoryReader } from "llamaindex/readers/SimpleDirectoryReader";
import { LlamaParseReader } from "llamaindex/readers/LlamaParseReader";
export const DATA_DIR = "./data";
export async function getDocuments() {
const reader = new SimpleDirectoryReader();
// Load PDFs using LlamaParseReader
return await reader.loadData({
directoryPath: DATA_DIR,
fileExtToReader: {
pdf: new LlamaParseReader({ resultType: "markdown" }),
},
});
}
Note: Reader classes have to be added explicitly to the fileExtToReader map in the Edge version of the SimpleDirectoryReader.
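From there, the loaded documents can feed an index just like in the getting-started example. Here is a sketch reusing the getDocuments helper above (in an Edge runtime, double-check that every class you import is Edge-compatible, as described earlier):

```typescript
import { VectorStoreIndex } from "llamaindex";

// Build an index over the parsed PDFs and ask a question about them.
const documents = await getDocuments();
const index = await VectorStoreIndex.fromDocuments(documents);
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({
  query: "Summarize the documents in ./data",
});
console.log(response.toString());
```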
You'll find a complete example with LlamaIndexTS here: https://github.com/run-llama/create_llama_projects/tree/main/nextjs-edge-llamaparse
We are in the very early days of LlamaIndex.TS. If you’re interested in hacking on it with us, check out our contributing guide.
Please join our Discord! https://discord.com/invite/eN6D2HQ4aX