
langgraph-checkpoint-redis
Redis implementation of the LangGraph agent checkpoint saver and store.
This repository contains Redis implementations for LangGraph, providing both Checkpoint Savers and Stores functionality.
The project consists of two main components:
Redis Checkpoint Savers: implementations that persist graph state (checkpoints) in Redis so it can be recovered between runs.
Redis Stores: a persistent key-value store with optional vector search capabilities.
The project requires the following main Python dependencies:
redis>=5.2.1
redisvl>=0.5.1
langgraph-checkpoint>=2.0.24
IMPORTANT: This library requires Redis with the following modules:
RedisJSON (JSON data type support)
RediSearch (secondary indexing and search)
If you're using Redis 8.0 or higher, both RedisJSON and RediSearch modules are included by default as part of the core Redis distribution. No additional installation is required.
If you're using a Redis version lower than 8.0, you'll need to ensure these modules are installed, for example by running Redis Stack (or the redis-stack-server Docker image), which bundles RedisJSON and RediSearch.
Failure to have these modules available will result in errors during index creation and checkpoint operations.
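To check whether your server already exposes both modules before calling setup(), you can list the loaded modules with redis-py. The snippet below is an illustrative sketch; the connection URL is a placeholder:
from redis import Redis
# Sketch: list the modules loaded on the Redis server. Entries for RedisJSON
# (reported as "ReJSON") and RediSearch (reported as "search") should appear;
# without them, index creation during setup() will fail.
client = Redis.from_url("redis://localhost:6379")
print(client.module_list())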
If you're using Azure Cache for Redis (especially Enterprise tier) or Redis Enterprise, there are important configuration considerations:
Azure Cache for Redis and Redis Enterprise use a proxy layer that makes the cluster appear as a single endpoint. This requires using a standard Redis client, not a cluster-aware client:
from redis import Redis
from langgraph.checkpoint.redis import RedisSaver
# ✅ CORRECT: Use standard Redis client for Azure/Enterprise
client = Redis(
    host="your-cache.redis.cache.windows.net",  # or your Redis Enterprise endpoint
    port=6379,  # or 10000 for Azure Enterprise with TLS
    password="your-access-key",
    ssl=True,  # Azure/Enterprise typically requires SSL
    ssl_cert_reqs="required",  # or "none" for self-signed certs
    decode_responses=False  # RedisSaver expects bytes
)
# Pass the configured client to RedisSaver
saver = RedisSaver(redis_client=client)
saver.setup()
# ❌ WRONG: Don't use RedisCluster client with Azure/Enterprise
# from redis.cluster import RedisCluster
# cluster_client = RedisCluster(...) # This will fail with proxy-based deployments
For Azure Cache for Redis Enterprise tier: use port 10000 with TLS, or 6379 for standard configurations.
Example for Azure Cache for Redis Enterprise:
client = Redis(
    host="your-cache.redisenterprise.cache.azure.net",
    port=10000,  # Enterprise TLS port
    password="your-access-key",
    ssl=True,
    ssl_cert_reqs="required",
    decode_responses=False
)
Install the library using pip:
pip install langgraph-checkpoint-redis
[!IMPORTANT] When using Redis checkpointers for the first time, make sure to call the .setup() method on them to create required indices. See the examples below.
from langgraph.checkpoint.redis import RedisSaver
write_config = {"configurable": {"thread_id": "1", "checkpoint_ns": ""}}
read_config = {"configurable": {"thread_id": "1"}}
with RedisSaver.from_conn_string("redis://localhost:6379") as checkpointer:
    # Call setup to initialize indices
    checkpointer.setup()
    checkpoint = {
        "v": 1,
        "ts": "2024-07-31T20:14:19.804150+00:00",
        "id": "1ef4f797-8335-6428-8001-8a1503f9b875",
        "channel_values": {
            "my_key": "meow",
            "node": "node"
        },
        "channel_versions": {
            "__start__": 2,
            "my_key": 3,
            "start:node": 3,
            "node": 3
        },
        "versions_seen": {
            "__input__": {},
            "__start__": {
                "__start__": 1
            },
            "node": {
                "start:node": 2
            }
        },
        "pending_sends": [],
    }
    # Store checkpoint
    checkpointer.put(write_config, checkpoint, {}, {})
    # Retrieve checkpoint
    loaded_checkpoint = checkpointer.get(read_config)
    # List all checkpoints
    checkpoints = list(checkpointer.list(read_config))
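In practice a checkpointer is usually passed to a compiled LangGraph graph rather than driven directly with put/get. The following is a minimal sketch under that assumption; the one-node graph, state schema, and thread_id are illustrative placeholders, not part of this library:
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.redis import RedisSaver
class State(TypedDict):
    my_key: str
def node(state: State) -> State:
    # Trivial node that rewrites the state value
    return {"my_key": "meow"}
builder = StateGraph(State)
builder.add_node("node", node)
builder.add_edge(START, "node")
builder.add_edge("node", END)
with RedisSaver.from_conn_string("redis://localhost:6379") as checkpointer:
    checkpointer.setup()
    graph = builder.compile(checkpointer=checkpointer)
    # Each thread_id gets its own checkpoint history in Redis
    result = graph.invoke({"my_key": "hi"}, {"configurable": {"thread_id": "1"}})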
from langgraph.checkpoint.redis.aio import AsyncRedisSaver
async def main():
    write_config = {"configurable": {"thread_id": "1", "checkpoint_ns": ""}}
    read_config = {"configurable": {"thread_id": "1"}}
    async with AsyncRedisSaver.from_conn_string("redis://localhost:6379") as checkpointer:
        # Call setup to initialize indices
        await checkpointer.asetup()
        checkpoint = {
            "v": 1,
            "ts": "2024-07-31T20:14:19.804150+00:00",
            "id": "1ef4f797-8335-6428-8001-8a1503f9b875",
            "channel_values": {
                "my_key": "meow",
                "node": "node"
            },
            "channel_versions": {
                "__start__": 2,
                "my_key": 3,
                "start:node": 3,
                "node": 3
            },
            "versions_seen": {
                "__input__": {},
                "__start__": {
                    "__start__": 1
                },
                "node": {
                    "start:node": 2
                }
            },
            "pending_sends": [],
        }
        # Store checkpoint
        await checkpointer.aput(write_config, checkpoint, {}, {})
        # Retrieve checkpoint
        loaded_checkpoint = await checkpointer.aget(read_config)
        # List all checkpoints
        checkpoints = [c async for c in checkpointer.alist(read_config)]
# Run the async main function
import asyncio
asyncio.run(main())
Shallow Redis checkpoint savers store only the latest checkpoint in Redis. These implementations are useful when retaining a complete checkpoint history is unnecessary.
from langgraph.checkpoint.redis.shallow import ShallowRedisSaver
# For async version: from langgraph.checkpoint.redis.ashallow import AsyncShallowRedisSaver
write_config = {"configurable": {"thread_id": "1", "checkpoint_ns": ""}}
read_config = {"configurable": {"thread_id": "1"}}
with ShallowRedisSaver.from_conn_string("redis://localhost:6379") as checkpointer:
    checkpointer.setup()
    # ... rest of the implementation follows a similar pattern
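    # Sketch (not from the library docs): because a shallow saver keeps only the
    # most recent checkpoint per thread, storing a second checkpoint replaces the
    # first. `checkpoint` is the checkpoint dictionary from the full-saver example
    # above; the second id is an arbitrary placeholder.
    checkpointer.put(write_config, checkpoint, {}, {})
    newer = {**checkpoint, "id": "1ef4f797-8335-6428-8002-000000000000"}
    checkpointer.put(write_config, newer, {}, {})
    assert len(list(checkpointer.list(read_config))) == 1  # only the latest remains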
Both Redis checkpoint savers and stores support automatic expiration using Redis TTL:
# Configure automatic expiration
ttl_config = {
    "default_ttl": 60,  # Expire checkpoints after 60 minutes
    "refresh_on_read": True,  # Reset expiration time when reading checkpoints
}
with RedisSaver.from_conn_string("redis://localhost:6379", ttl=ttl_config) as saver:
    saver.setup()
    # Checkpoints will expire after 60 minutes of inactivity
When no TTL is configured, checkpoints are persistent (never expire automatically).
You can make specific checkpoints persistent by removing their TTL. This is useful for "pinning" important threads that should never expire:
from langgraph.checkpoint.redis import RedisSaver
# Create saver with default TTL
saver = RedisSaver.from_conn_string("redis://localhost:6379", ttl={"default_ttl": 60})
saver.setup()
# Save a checkpoint
config = {"configurable": {"thread_id": "important-thread", "checkpoint_ns": ""}}
# `checkpoint` and `metadata` are a checkpoint dictionary and its metadata, as in the earlier examples
saved_config = saver.put(config, checkpoint, metadata, {})
# Remove TTL from the checkpoint to make it persistent
checkpoint_id = saved_config["configurable"]["checkpoint_id"]
checkpoint_key = f"checkpoint:important-thread:__empty__:{checkpoint_id}"
saver._apply_ttl_to_keys(checkpoint_key, ttl_minutes=-1)
# The checkpoint is now persistent and won't expire
This makes it easy to manage storage and ensure ephemeral data is automatically cleaned up while keeping important data persistent.
Redis Stores provide a persistent key-value store with optional vector search capabilities.
from langgraph.store.redis import RedisStore
# Basic usage
with RedisStore.from_conn_string("redis://localhost:6379") as store:
    store.setup()
    # Use the store...
# With vector search configuration
index_config = {
    "dims": 1536,  # Vector dimensions
    "distance_type": "cosine",  # Distance metric
    "fields": ["text"],  # Fields to index
}
# With TTL configuration
ttl_config = {
    "default_ttl": 60,  # Default TTL in minutes
    "refresh_on_read": True,  # Refresh TTL when store entries are read
}
with RedisStore.from_conn_string(
    "redis://localhost:6379",
    index=index_config,
    ttl=ttl_config
) as store:
    store.setup()
    # Use the store with vector search and TTL capabilities...
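For reference, basic store operations follow LangGraph's BaseStore interface (put, get, search). A minimal sketch, assuming the plain store configured above; the namespace, key, and values are placeholders:
with RedisStore.from_conn_string("redis://localhost:6379") as store:
    store.setup()
    # Write a value under a (namespace, key) pair
    store.put(("users", "user-1"), "preferences", {"language": "en"})
    # Read it back; returns an item with a .value dict, or None if missing
    item = store.get(("users", "user-1"), "preferences")
    print(item.value if item else None)
    # List items under a namespace prefix (semantic query= ranking requires a vector index)
    results = store.search(("users", "user-1"))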
import asyncio
from langgraph.store.redis.aio import AsyncRedisStore
async def main():
    # TTL also works with async implementations
    ttl_config = {
        "default_ttl": 60,  # Default TTL in minutes
        "refresh_on_read": True,  # Refresh TTL when store entries are read
    }
    async with AsyncRedisStore.from_conn_string(
        "redis://localhost:6379",
        ttl=ttl_config
    ) as store:
        await store.setup()
        # Use the store asynchronously...
asyncio.run(main())
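The asynchronous store exposes the same operations with an "a" prefix (aput, aget, asearch). A minimal sketch under the same assumptions as above:
async def store_example():
    async with AsyncRedisStore.from_conn_string("redis://localhost:6379") as store:
        await store.setup()
        await store.aput(("users", "user-1"), "preferences", {"language": "en"})
        item = await store.aget(("users", "user-1"), "preferences")
        print(item.value if item else None)
asyncio.run(store_example())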
The examples directory contains Jupyter notebooks demonstrating the usage of Redis with LangGraph:
persistence_redis.ipynb: Demonstrates the usage of Redis checkpoint savers with LangGraph
create-react-agent-memory.ipynb: Shows how to create an agent with persistent memory using Redis
cross-thread-persistence.ipynb: Demonstrates cross-thread persistence capabilities
persistence-functional.ipynb: Shows functional persistence patterns with Redis
To run the example notebooks with Docker:
Navigate to the examples directory:
cd examples
Start the Docker containers:
docker compose up
Open the URL shown in the console (typically http://127.0.0.1:8888/tree) in your browser to access Jupyter.
When finished, stop the containers:
docker compose down
This implementation relies on specific Redis modules: RedisJSON (used to store checkpoint and store data as JSON documents) and RediSearch (used to create and query the indices described below).
The Redis implementation creates these main indices using RediSearch:
For Redis Stores with vector search:
Both Redis checkpoint savers and stores leverage Redis's native key expiration:
EXPIRE command for setting TTL
PERSIST command to remove TTL (with ttl_minutes=-1)
We welcome contributions! Here's how you can help:
Clone the repository:
git clone https://github.com/redis-developer/langgraph-redis
cd langgraph-redis
Install dependencies:
poetry install --all-extras
The project includes several make commands for development:
Testing:
make test # Run all tests
make test-all # Run all tests including API tests
Linting and Formatting:
make format # Format all files with Black and isort
make lint # Run formatting, type checking, and other linters
make check-types # Run mypy type checking
Code Quality:
make test-coverage # Run tests with coverage reporting
make coverage-report # Generate coverage report without running tests
make coverage-html # Generate HTML coverage report (opens in htmlcov/)
make find-dead-code # Find unused code with vulture
Redis for Development/Testing:
make redis-start # Start Redis Stack in Docker (includes RedisJSON and RediSearch modules)
make redis-stop # Stop Redis container
Before submitting changes, run make test, make format, and make lint.
This project is licensed under the MIT License.