🗂️ LlamaIndex 🦙

LlamaIndex OSS is an open-source framework for building agentic applications. LlamaParse is our enterprise platform for agentic OCR, parsing, extraction, indexing, and more. You can use LlamaParse with this framework or on its own; see the LlamaParse section below for signup and product links.
Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins). There are two ways to start building with LlamaIndex in Python:
- Starter: llama-index. A starter Python package that includes core LlamaIndex as well as a selection of integrations.
- Customized: llama-index-core. Install core LlamaIndex and add the LlamaIndex integration packages from LlamaHub that your application requires. There are over 300 LlamaIndex integration packages that work seamlessly with core, allowing you to build with your preferred LLM, embedding, and vector store providers.
The LlamaIndex Python library is namespaced such that import statements that include core use the core package, while import statements without core use an integration package. For example, the import path llama_index.llms.openai is provided by the separate llama-index-llms-openai package.
```python
# general pattern
from llama_index.core.xxx import ClassABC  # core submodule xxx
from llama_index.xxx.yyy import (
    SubclassABC,
)  # integration xxx, submodule yyy

# concrete example
from llama_index.core.llms import LLM
from llama_index.llms.openai import OpenAI
```
LlamaParse (document agent platform)
LlamaParse is its own platform, focused on document agents and agentic OCR. It includes Parse (parsing), LlamaAgents (deployed document agents), Extract (structured extraction), and Index (ingest and RAG). You can use it with the LlamaIndex framework or standalone; a short parsing sketch follows the list below.
- Sign up for LlamaParse – Create an account and get your API key.
- Parse – Agentic OCR and document parsing (130+ formats). Docs
- Extract – Structured data extraction from documents. Docs
- Index – Ingest, index, and RAG pipelines. Docs
- Split – Split large documents into subcategories. Docs
- Agents – Build end-to-end document agents with Workflows and Agent Builder. Docs
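As a minimal sketch of Parse, assuming the llama-parse Python client (pip install llama-parse) and an API key from the signup link above; the file path and environment variable usage here are illustrative placeholders:

```python
import os

from llama_parse import LlamaParse  # pip install llama-parse

parser = LlamaParse(
    api_key=os.environ["LLAMA_CLOUD_API_KEY"],  # key from the signup link above
    result_type="markdown",  # or "text"
)

# parse a local file into LlamaIndex Document objects
documents = parser.load_data("./my_report.pdf")
print(documents[0].text[:500])
```

The returned documents can be fed directly into an index, as in the framework examples below.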
Important Links
- Documentation
- X (formerly Twitter)
- LinkedIn
- Reddit
- Discord
🚀 Overview
NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!
Context
- LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
- How do we best augment LLMs with our own private data?
- We need a comprehensive toolkit to help perform this data augmentation for LLMs.
Proposed Solution
That's where LlamaIndex comes in. LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:
- Offers data connectors to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.).
- Provides ways to structure your data (indices, graphs) so that this data can be easily used with LLMs.
- Provides an advanced retrieval/query interface over your data: Feed in any LLM input prompt, get back retrieved context and knowledge-augmented output.
- Allows easy integrations with your outer application framework (e.g. with LangChain, Flask, Docker, ChatGPT, or anything else).
LlamaIndex provides tools for both beginner users and advanced users. Our high-level API allows beginner users to use LlamaIndex to ingest and query their data in
5 lines of code. Our lower-level APIs allow advanced users to customize and extend any module (data connectors, indices, retrievers, query engines, reranking modules),
to fit their needs.
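As one sketch of the lower-level API (just one documented combination among many, not the only way to customize), you can swap the default query path for an explicitly configured retriever:

```python
# a sketch of the lower-level API: configure retrieval explicitly
# instead of relying on index.as_query_engine() defaults
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.query_engine import RetrieverQueryEngine

# uses the default LLM/embedding settings (OpenAI unless overridden)
documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)

# retrieve the top 5 most similar chunks instead of the default
retriever = index.as_retriever(similarity_top_k=5)
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("YOUR_QUESTION")
print(response)
```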
💡 Contributing
Interested in contributing? Contributions to LlamaIndex core, as well as integrations that build on the core, are both accepted and highly encouraged! See our Contribution Guide for more details.
New integrations should meaningfully integrate with existing LlamaIndex framework components. At the discretion of LlamaIndex maintainers, some integrations may be declined.
📄 Documentation
Full documentation can be found at https://docs.llamaindex.ai/
Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!
💻 Example Usage
```sh
pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-llms-ollama
pip install llama-index-embeddings-huggingface
```
Examples are in the docs/examples folder. Index implementations are in the indices folder of the repository.
To build a simple vector store index using OpenAI:
```python
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# load documents from a local directory and index them
documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)
```
To build a simple vector store index using non-OpenAI LLMs, e.g. LLMs hosted through Ollama:
```python
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from transformers import AutoTokenizer

# point the global defaults at a locally hosted Ollama model
Settings.llm = Ollama(
    model="llama-3.1:latest",
    request_timeout=360.0,
)
# use the matching tokenizer for accurate token counting
Settings.tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct"
)
# use a local HuggingFace embedding model instead of OpenAI
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"
)

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(
    documents,
)
```
To query:
```python
query_engine = index.as_query_engine()
response = query_engine.query("YOUR_QUESTION")
# the retrieved context is available via response.source_nodes
print(response)
```
By default, data is stored in-memory.
To persist to disk (under ./storage):
```python
index.storage_context.persist()
```
To reload from disk:
```python
from llama_index.core import StorageContext, load_index_from_storage

# rebuild the storage context from the persisted directory
storage_context = StorageContext.from_defaults(persist_dir="./storage")
# reload the index without re-embedding the documents
index = load_index_from_storage(storage_context)
```
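Note that persistence stores your nodes and vectors, not your model configuration; if you customized Settings (LLM, embedding model) when building the index, set the same Settings before reloading. You can then query the reloaded index exactly as before:

```python
query_engine = index.as_query_engine()
print(query_engine.query("YOUR_QUESTION"))
```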
A note on Verification of Build Assets
By default, llama-index-core includes a _static folder that contains the nltk and tiktoken cache that is included with the package installation. This ensures that you can easily run llama-index in environments with restrictive disk access permissions at runtime.
To verify that these files are safe and valid, we use the GitHub attest-build-provenance action. This action attests that the files shipped in the _static folder are the same as the files in the llama-index-core/llama_index/core/_static folder of the repository.
To verify this yourself, you can run the following script against your installed package (requires the GitHub CLI, gh):
```bash
#!/bin/bash

# Path to the _static folder of your installed package; adjust for
# your virtual environment and Python version
STATIC_DIR="venv/lib/python3.13/site-packages/llama_index/core/_static"
REPO="run-llama/llama_index"

# Check the provenance attestation of every file in the folder
find "$STATIC_DIR" -type f | while read -r file; do
    echo "Verifying: $file"
    gh attestation verify "$file" -R "$REPO" || echo "Failed to verify: $file"
done
```
🔬 Citation
Reference to cite if you use LlamaIndex in a paper:
```bibtex
@software{Liu_LlamaIndex_2022,
    author = {Liu, Jerry},
    doi = {10.5281/zenodo.1234},
    month = {11},
    title = {{LlamaIndex}},
    url = {https://github.com/jerryjliu/llama_index},
    year = {2022}
}
```