
langchain-redis
This package contains the LangChain integration with Redis, providing powerful tools for vector storage, semantic caching, and chat history management.
pip install -U langchain-redis
This will install the package along with its dependencies, including redis, redisvl, and ulid.
To use this package, you need to have a Redis instance running. You can configure the connection by setting the following environment variable:
export REDIS_URL="redis://username:password@localhost:6379"
Alternatively, you can pass the Redis URL directly when initializing the components or use the RedisConfig class for more detailed configuration.
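For example (a minimal sketch; RedisCache is shown here, but the same redis_url parameter is accepted by the other components):
from langchain_redis import RedisCache
# Point the component at a specific instance instead of relying on REDIS_URL.
cache = RedisCache(redis_url="redis://localhost:6379")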
This package supports various Redis deployment modes through different connection URL schemes:
# Standard Redis
redis_url = "redis://localhost:6379"
# Redis with authentication
redis_url = "redis://username:password@localhost:6379"
# Redis SSL/TLS
redis_url = "rediss://localhost:6380"
Redis Sentinel provides high availability for Redis. You can connect to a Sentinel-managed Redis deployment using the redis+sentinel:// URL scheme:
# Single Sentinel node
redis_url = "redis+sentinel://sentinel-host:26379/mymaster"
# Multiple Sentinel nodes (recommended for high availability)
redis_url = "redis+sentinel://sentinel1:26379,sentinel2:26379,sentinel3:26379/mymaster"
# Sentinel with authentication
redis_url = "redis+sentinel://username:password@sentinel1:26379,sentinel2:26379/mymaster"
The Sentinel URL format is: redis+sentinel://[username:password@]host1:port1[,host2:port2,...]/service_name
Where:
host:port - One or more Sentinel node addresses
service_name - The name of the Redis master service (e.g., "mymaster")
Example using Sentinel with RedisVectorStore:
from langchain_redis import RedisVectorStore, RedisConfig
from langchain_openai import OpenAIEmbeddings
config = RedisConfig(
redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster",
index_name="my_index"
)
vector_store = RedisVectorStore(
embeddings=OpenAIEmbeddings(),
config=config
)
Example using Sentinel with RedisCache:
from langchain_redis import RedisCache
cache = RedisCache(
redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster",
ttl=3600
)
Example using Sentinel with RedisChatMessageHistory:
from langchain_redis import RedisChatMessageHistory
history = RedisChatMessageHistory(
session_id="user_123",
redis_url="redis+sentinel://sentinel1:26379,sentinel2:26379/mymaster"
)
The RedisVectorStore class provides a vector database implementation using Redis.
from langchain_redis import RedisVectorStore, RedisConfig
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()  # or any other LangChain Embeddings implementation
config = RedisConfig(
index_name="my_vectors",
redis_url="redis://localhost:6379",
distance_metric="COSINE" # Options: COSINE, L2, IP
)
vector_store = RedisVectorStore(embeddings, config=config)
# Adding documents
texts = ["Document 1 content", "Document 2 content"]
metadatas = [{"source": "file1"}, {"source": "file2"}]
vector_store.add_texts(texts, metadatas=metadatas)
# Adding documents with custom keys
custom_keys = ["doc1", "doc2"]
vector_store.add_texts(texts, metadatas=metadatas, keys=custom_keys)
# Similarity search
query = "Sample query"
docs = vector_store.similarity_search(query, k=2)
# Similarity search with score
docs_and_scores = vector_store.similarity_search_with_score(query, k=2)
# Similarity search with filtering (Tag comes from redisvl, installed as a dependency)
from redisvl.query.filter import Tag
filter_expr = Tag("category") == "science"
filtered_docs = vector_store.similarity_search(query, k=2, filter=filter_expr)
# Maximum marginal relevance search
docs = vector_store.max_marginal_relevance_search(query, k=2, fetch_k=10)
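Because RedisVectorStore implements the standard LangChain VectorStore interface, it can also be wrapped as a retriever via the base class's as_retriever method (a brief sketch):
# Expose the store through the standard retriever interface.
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
docs = retriever.invoke("Sample query")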
The RedisCache, RedisSemanticCache, and LangCacheSemanticCache classes provide caching mechanisms for LLM calls.
from langchain_redis import RedisCache, RedisSemanticCache, LangCacheSemanticCache
from langchain_openai import OpenAIEmbeddings
# Standard cache
cache = RedisCache(redis_url="redis://localhost:6379", ttl=3600)
# Semantic cache
embeddings = OpenAIEmbeddings()  # or any other LangChain Embeddings implementation
semantic_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=embeddings,
distance_threshold=0.1
)
# LangCache semantic cache - the managed service handles embeddings for you
langchain_cache = LangCacheSemanticCache(
cache_id="your-cache-id",
api_key="your-api-key",
distance_threshold=0.1
)
# Using a cache with an LLM: set it globally for all models...
from langchain_core.globals import set_llm_cache
set_llm_cache(cache)  # or set_llm_cache(semantic_cache) / set_llm_cache(langchain_cache)
# ...or pass it to a concrete model, e.g. ChatOpenAI(cache=cache)
# Async cache operations (await requires an async context; see the driver below)
from langchain_core.outputs import Generation
await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
cached_result = await cache.alookup("prompt", "llm_string")
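Since await is only valid inside a coroutine, a minimal driver for the calls above could look like this (asyncio.run is shown purely for illustration):
import asyncio
from langchain_core.outputs import Generation
async def warm_cache():
    # Store a generation under (prompt, llm_string), then read it back.
    await cache.aupdate("prompt", "llm_string", [Generation(text="cached_response")])
    return await cache.alookup("prompt", "llm_string")
cached = asyncio.run(warm_cache())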
The RedisChatMessageHistory class provides a Redis-based storage for chat message history with efficient search capabilities.
from langchain_redis import RedisChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
# Initialize with optional TTL (time-to-live) in seconds
history = RedisChatMessageHistory(
session_id="user_123",
redis_url="redis://localhost:6379",
ttl=3600, # Messages will expire after 1 hour
)
# Adding messages
history.add_message(HumanMessage(content="Hello, AI!"))
history.add_message(AIMessage(content="Hello, human! How can I assist you today?"))
history.add_message(SystemMessage(content="This is a system message"))
# Retrieving all messages in chronological order
messages = history.messages
# Searching messages with full-text search
results = history.search_messages("assist", limit=5) # Returns matching messages
# Get message count
message_count = len(history)
# Clear history for current session
history.clear()
# Delete all sessions and index (use with caution)
history.delete()
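RedisChatMessageHistory implements LangChain's standard BaseChatMessageHistory interface, so it can back a conversational chain. A brief sketch using langchain_core's RunnableWithMessageHistory, where chain stands in for any runnable you have built (e.g. prompt | model):
from langchain_core.runnables.history import RunnableWithMessageHistory
def get_history(session_id: str) -> RedisChatMessageHistory:
    # One Redis-backed history per conversation.
    return RedisChatMessageHistory(session_id=session_id, redis_url="redis://localhost:6379")
chain_with_history = RunnableWithMessageHistory(
    chain,  # assumed: any LangChain runnable, e.g. prompt | model
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)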
The RedisConfig class allows for detailed configuration of the Redis integration:
from langchain_redis import RedisConfig
config = RedisConfig(
index_name="my_index",
redis_url="redis://localhost:6379",
distance_metric="COSINE",
key_prefix="my_prefix",
vector_datatype="FLOAT32",
storage_type="hash",
metadata_schema=[
{"name": "category", "type": "tag"},
{"name": "price", "type": "numeric"}
]
)
Refer to the inline documentation for detailed information on these configuration options.
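The fields declared in metadata_schema become filterable at query time. A short sketch, assuming redisvl's Tag and Num filter helpers (redisvl is installed as a dependency of this package):
from redisvl.query.filter import Num, Tag
vector_store = RedisVectorStore(embeddings, config=config)
# Combine a tag match with a numeric range over the declared metadata fields.
docs = vector_store.similarity_search(
    "query",
    filter=(Tag("category") == "science") & (Num("price") < 100),
)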
The package uses Python's standard logging module. You can configure logging to get more information about the package's operations:
import logging
logging.basicConfig(level=logging.INFO)
Error handling is done through custom exceptions. Make sure to handle these exceptions in your application code.
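For instance, connection failures raised by the underlying redis-py client can be caught explicitly (a minimal sketch; the package's own exception types may wrap these):
import redis.exceptions
try:
    vector_store.add_texts(["Document content"])
except redis.exceptions.ConnectionError:
    # Redis was unreachable; retry, fall back, or surface the failure.
    raise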
Tune the k and fetch_k parameters in similarity searches to balance accuracy and performance. For more detailed examples and use cases, please refer to the docs/ directory in this repository.
The library is rooted at libs/redis; run all of the commands below from libs/redis:
To install dependencies for unit tests:
poetry install --with test
To run unit tests:
make test
To run a specific test:
TEST_FILE=tests/unit_tests/test_imports.py make test
You will need an OpenAI API key to run the integration tests:
export OPENAI_API_KEY=sk-J3nnYJ3nnYWh0Can1Turnt0Ug1VeMe50mth1n1cAnH0ld0n2
To install dependencies for integration tests:
poetry install --with test,test_integration
To run integration tests:
make integration_tests
Install langchain-redis development requirements (for running langchain, running examples, linting, formatting, tests, and coverage):
poetry install --with lint,typing,test,test_integration
Then verify dependency installation:
make lint
This project is licensed under the MIT License (LICENSE).