langchain-redis
This package contains the LangChain integration with Redis, providing powerful tools for vector storage, semantic caching, and chat history management.
Installation
pip install -U langchain-redis
This will install the package along with its dependencies, including redis, redisvl, and python-ulid.
Configuration
To use this package, you need to have a Redis instance running. You can configure the connection by setting the following environment variable:
export REDIS_URL="redis://username:password@localhost:6379"
Alternatively, you can pass the Redis URL directly when initializing the components or use the RedisConfig class for more detailed configuration.
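For example, a minimal sketch of both approaches (assuming a local Redis instance; the session ID is an arbitrary placeholder):
import os
from langchain_redis import RedisChatMessageHistory

# Read the URL from the environment (set above), falling back to a local default:
redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379")

# Or pass it directly when initializing a component:
history = RedisChatMessageHistory(session_id="session_1", redis_url=redis_url)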
Redis Connection Options
This package supports various Redis deployment modes through different connection URL schemes:
Standard Redis Connection
redis_url = "redis://localhost:6379"
redis_url = "redis://username:password@localhost:6379"
redis_url = "rediss://localhost:6380"
Redis Sentinel Connection
Redis Sentinel provides high availability for Redis. You can connect to a Sentinel-managed Redis deployment using the redis+sentinel:// URL scheme:
redis_url = "redis+sentinel://sentinel-host:26379/mymaster"
redis_url = "redis+sentinel://sentinel1:26379,sentinel2:26379,sentinel3:26379/mymaster"
redis_url = "redis+sentinel://username:password@sentinel1:26379,sentinel2:26379/mymaster"
The Sentinel URL format is: redis+sentinel://[username:password@]host1:port1[,host2:port2,...]/service_name
Where:
- host:port - One or more Sentinel node addresses
- service_name - The name of the Redis master service (e.g., "mymaster")
Example using Sentinel with RedisVectorStore:
from langchain_redis import RedisVectorStore, RedisConfig
from langchain_openai import OpenAIEmbeddings
config = RedisConfig(
redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster",
index_name="my_index"
)
vector_store = RedisVectorStore(
embeddings=OpenAIEmbeddings(),
config=config
)
Example using Sentinel with RedisCache:
from langchain_redis import RedisCache
cache = RedisCache(
redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster",
ttl=3600
)
Example using Sentinel with RedisChatMessageHistory:
from langchain_redis import RedisChatMessageHistory
history = RedisChatMessageHistory(
session_id="user_123",
redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster"
)
Features
1. Vector Store
The RedisVectorStore class provides a vector database implementation using Redis.
Usage
from langchain_redis import RedisVectorStore, RedisConfig
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()  # Embeddings is abstract; use a concrete implementation
config = RedisConfig(
index_name="my_vectors",
redis_url="redis://localhost:6379",
distance_metric="COSINE"
)
vector_store = RedisVectorStore(embeddings, config=config)
texts = ["Document 1 content", "Document 2 content"]
metadatas = [{"source": "file1"}, {"source": "file2"}]
vector_store.add_texts(texts, metadatas=metadatas)
custom_keys = ["doc1", "doc2"]
vector_store.add_texts(texts, metadatas=metadatas, keys=custom_keys)
query = "Sample query"
docs = vector_store.similarity_search(query, k=2)
docs_and_scores = vector_store.similarity_search_with_score(query, k=2)
from redisvl.query.filter import Tag
filter_expr = Tag("category") == "science"
filtered_docs = vector_store.similarity_search(query, k=2, filter=filter_expr)
docs = vector_store.max_marginal_relevance_search(query, k=2, fetch_k=10)
Features
- Efficient vector storage and retrieval
- Support for metadata filtering
- Multiple distance metrics: Cosine similarity, L2, and Inner Product
- Maximum marginal relevance search
- Custom key support for document indexing
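Because RedisVectorStore implements the standard LangChain VectorStore interface, it can also be exposed as a retriever; a minimal sketch, reusing the vector_store instance from above:
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
docs = retriever.invoke("Sample query")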
2. Cache
The RedisCache, RedisSemanticCache, and LangCacheSemanticCache classes provide caching mechanisms for LLM calls.
Usage
from langchain_redis import RedisCache, RedisSemanticCache, LangCacheSemanticCache
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings
cache = RedisCache(redis_url="redis://localhost:6379", ttl=3600)
embeddings = OpenAIEmbeddings()  # Embeddings is abstract; use a concrete implementation
semantic_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embeddings=embeddings,
distance_threshold=0.1
)
langchain_cache = LangCacheSemanticCache(
cache_id="your-cache-id",
api_key="your-api-key",
distance_threshold=0.1
)
from langchain_core.outputs import Generation
set_llm_cache(cache)  # route all LLM calls through the cache
# The async operations below must run inside an async function:
await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
cached_result = await cache.alookup("prompt", "llm_string")
Features
- Efficient caching of LLM responses
- TTL support for automatic cache expiration
- Semantic caching for similarity-based retrieval
- Asynchronous cache operations (see the sketch after this list)
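The async operations require a running event loop; a minimal self-contained sketch using asyncio, reusing the cache instance from above:
import asyncio
from langchain_core.outputs import Generation

async def main() -> None:
    await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
    print(await cache.alookup("prompt", "llm_string"))

asyncio.run(main())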
What is Redis LangCache?
- LangCache is a fully managed, cloud-based service that provides a semantic cache for LLM applications.
- It manages embeddings and vector search for you, allowing you to focus on your application logic.
- See our docs to learn more, or try LangCache on Redis Cloud today.
3. Chat History
The RedisChatMessageHistory class provides a Redis-based storage for chat message history with efficient search capabilities.
Usage
from langchain_redis import RedisChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
history = RedisChatMessageHistory(
session_id="user_123",
redis_url="redis://localhost:6379",
ttl=3600,
)
history.add_message(HumanMessage(content="Hello, AI!"))
history.add_message(AIMessage(content="Hello, human! How can I assist you today?"))
history.add_message(SystemMessage(content="This is a system message"))
messages = history.messages
results = history.search_messages("assist", limit=5)
message_count = len(history)
history.clear()
history.delete()
Features
- Fast storage of chat messages with automatic expiration (TTL)
- Support for different message types (Human, AI, System)
- Full-text search capabilities across message content
- Chronological message retrieval
- Session-based message organization
- Customizable key prefixing
- Thread-safe operations
- Efficient RedisVL-based indexing and querying
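To use this history inside a chain, it can be wired up with the standard RunnableWithMessageHistory wrapper from langchain-core; a minimal sketch in which chain is an illustrative placeholder for any Runnable (e.g. prompt | model):
from langchain_core.runnables.history import RunnableWithMessageHistory

def get_history(session_id: str) -> RedisChatMessageHistory:
    # One history object per conversation, keyed by session_id
    return RedisChatMessageHistory(session_id=session_id, redis_url="redis://localhost:6379")

chain_with_history = RunnableWithMessageHistory(
    chain,  # placeholder: any Runnable, e.g. prompt | model
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)
chain_with_history.invoke(
    {"input": "Hello!"},
    config={"configurable": {"session_id": "user_123"}},
)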
Advanced Configuration
The RedisConfig class allows for detailed configuration of the Redis integration:
from langchain_redis import RedisConfig
config = RedisConfig(
index_name="my_index",
redis_url="redis://localhost:6379",
distance_metric="COSINE",
key_prefix="my_prefix",
vector_datatype="FLOAT32",
storage_type="hash",
metadata_schema=[
{"name": "category", "type": "tag"},
{"name": "price", "type": "numeric"}
]
)
Refer to the inline documentation for detailed information on these configuration options.
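The metadata_schema above is what enables typed filtering at query time; a sketch using RedisVL filter expressions against the category and price fields declared in the config (the field values are illustrative):
from redisvl.query.filter import Num, Tag

# Combine a tag equality filter and a numeric range filter with &:
filter_expr = (Tag("category") == "electronics") & (Num("price") < 100)
docs = vector_store.similarity_search("query", k=5, filter=filter_expr)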
Error Handling and Logging
The package uses Python's standard logging module. You can configure logging to get more information about the package's operations:
import logging
logging.basicConfig(level=logging.INFO)
Error handling is done through custom exceptions. Make sure to handle these exceptions in your application code.
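At minimum, connection-level errors raised by the underlying redis client are worth catching; a hedged sketch using redis-py's ConnectionError (the exact exceptions to handle depend on the operations your application uses):
import logging
from redis.exceptions import ConnectionError as RedisConnectionError

try:
    docs = vector_store.similarity_search("query", k=2)
except RedisConnectionError as exc:
    logging.error("Redis connection failed: %s", exc)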
Performance Considerations
- For large datasets, consider using batched operations when adding documents to the vector store (see the sketch after this list).
- Adjust the k and fetch_k parameters in similarity searches to balance accuracy and performance.
- Use an appropriate indexing algorithm (FLAT or HNSW) based on your dataset size and query requirements.
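A simple batching sketch for the first point (the batch size of 1000 is an arbitrary starting point; tune it for your payload sizes and deployment):
batch_size = 1000
for i in range(0, len(texts), batch_size):
    vector_store.add_texts(
        texts[i : i + batch_size],
        metadatas=metadatas[i : i + batch_size],
    )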
Examples
For more detailed examples and use cases, please refer to the docs/ directory in this repository.
Contributing / Development
The library is rooted at libs/redis; for all of the commands below, first cd into libs/redis:
Unit Tests
To install dependencies for unit tests:
poetry install --with test
To run unit tests:
make test
To run a specific test:
TEST_FILE=tests/unit_tests/test_imports.py make test
Integration Tests
You will need an OpenAI API key to run the integration tests:
export OPENAI_API_KEY=sk-J3nnYJ3nnYWh0Can1Turnt0Ug1VeMe50mth1n1cAnH0ld0n2
To install dependencies for integration tests:
poetry install --with test,test_integration
To run integration tests:
make integration_tests
Local Development
Install langchain-redis development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
poetry install --with lint,typing,test,test_integration
Then verify dependency installation:
make lint
License
This project is licensed under the MIT License (LICENSE).