As of redis-py 4.0.0, this library is deprecated. Its features have been merged into redis-py. Please install redis-py either from PyPI or from the repo.
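If you are migrating, the same functionality is available through redis-py's search commands. As a minimal sketch (assuming redis-py >= 4.0 and a Redis server with the RediSearch module loaded), the equivalent calls look roughly like this:

from redis import Redis

# redis-py exposes the RediSearch commands behind the ft() helper.
r = Redis(host="localhost", port=6379)
r.ft("my-index").info()                             # FT.INFO my-index
results = r.ft("my-index").search("evil wizards")   # FT.SEARCH my-index "evil wizards"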
This is a Python search engine library that utilizes the RediSearch Redis Module API.
It is the "official" client of RediSearch, and should be regarded as its canonical client implementation.
RediSearch is a source-available (RSAL), high-performance search engine implemented as a Redis module. It uses custom data types to provide fast, stable, and feature-rich full-text search inside Redis.
This client is a wrapper around the RediSearch API protocol that allows you to use its features easily.
For more details, visit http://redisearch.io
When you create a redisearch-py client instance, the only required argument is the name of the index.
from redisearch import Client
client = Client("my-index")
To connect with a username and/or password, pass those options to the client initializer.
client = Client("my-index", password="my-password")
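Depending on your version of redisearch-py, the Client initializer should also accept host and port keyword arguments if your Redis server is not at the default localhost:6379; the hostname below is a placeholder:

client = Client("my-index", host="redis.example.com", port=6380, password="my-password")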
Every instance of Client contains an instance of the redis-py Client as well. Use this object to run core Redis commands.
import datetime
from redisearch import Client
START_TIME = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M.%S")
client = Client("my-index")
client.redis.set("start-time", START_TIME)
To check if a RediSearch index exists, use the FT.INFO command and catch the ResponseError raised if the index does not exist.
from redis import ResponseError
from redisearch import Client
client = Client("my-index")
try:
    client.info()
except ResponseError:
    # Index does not exist. We need to create it!
    pass
Use an instance of IndexDefinition to define a search index. You only need to do this when you create an index.
RediSearch indexes follow Hashes in your Redis databases by watching key prefixes. If a Hash whose key starts with one of the search index's configured key prefixes is added to, updated in, or deleted from Redis, RediSearch will make those changes in the index. You configure a search index's key prefixes using the prefix parameter of the IndexDefinition initializer.
NOTE: Once you create an index, RediSearch will continuously index these keys when their Hashes change.
IndexDefinition also takes a schema. The schema specifies which fields to index from within the Hashes that the index follows. The field types are TextField, NumericField, GeoField, and TagField. For more information on what these field types mean, consult the RediSearch documentation on the FT.CREATE command.
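As an illustrative sketch, a schema that uses each of these field types might look like the following (the field names are made up for the example):

from redisearch import TextField, NumericField, GeoField, TagField

SCHEMA = (
    TextField("title", weight=5.0),      # full-text indexed, boosted in scoring
    NumericField("published_year"),      # supports numeric range queries
    GeoField("location"),                # supports geo radius queries
    TagField("genres", separator=","),   # exact-match tags split on a separator
)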
With redisearch-py, the schema is an iterable of Field instances. Once you have an IndexDefinition instance, you can create the index by passing a schema iterable to the create_index() method.
from redis import ResponseError
from redisearch import Client, IndexDefinition, TextField
SCHEMA = (
    TextField("title", weight=5.0),
    TextField("body")
)
client = Client("my-index")
definition = IndexDefinition(prefix=['blog:'])
try:
    client.info()
except ResponseError:
    # Index does not exist. We need to create it!
    client.create_index(SCHEMA, definition=definition)
A RediSearch 2.0 index continually follows Hashes with the key prefixes you defined, so if you want to add a document to the index, you only need to create a Hash with one of those prefixes.
# Indexing a document with RediSearch 2.0.
doc = {
    'title': 'RediSearch',
    'body': 'Redisearch adds querying, indexing, and full-text search to Redis'
}
client.redis.hset('doc:1', mapping=doc)
Past versions of RediSearch required that you call the add_document() method. This method is deprecated, but we include its usage here for reference.
# Indexing a document for RediSearch 1.x
client.add_document(
    "doc:2",
    title="RediSearch",
    body="Redisearch implements a search engine on top of redis",
)
Use the search() method to perform basic full-text and field-specific searches. This method doesn't take many of the options available to the RediSearch FT.SEARCH command -- read the section on building complex queries later in this document for information on how to use those.
res = client.search("evil wizards")
Results are wrapped in a Result object that includes the number of results and a list of matching documents.
>>> print(res.total)
2
>>> print(res.docs[0].title)
"Wizard Story 2: Evil Wizards Strike Back"
You can use the Query object to build complex queries:
from redisearch import Query

q = Query("evil wizards").verbatim().no_content().with_scores().paging(0, 5)
res = client.search(q)
For an explanation of these options, see the RediSearch documentation for the FT.SEARCH command.
The default behavior of queries is to run a full-text search across all TEXT fields in the index for the intersection of all terms in the query. So the example given in the "Basic queries" section of this README, client.search("evil wizards"), runs a full-text search for the intersection of "evil" and "wizards" in all TEXT fields.
Many more types of queries are possible, however! The string you pass into the search() method or Query() initializer has the full range of query syntax available in RediSearch.
For example, a full-text search against a specific TEXT field in the index looks like this:
# Full-text search against the title field
res = client.search("@title:evil wizards")
Finding books published in 2020 or 2021 looks like this:
client.search("@published_year:[2020 2021]")
To learn more, see the RediSearch documentation on query syntax.
This library contains a programmatic interface to run aggregation queries with RediSearch. To make an aggregation query, pass an instance of the AggregateRequest class to the aggregate() method of a Client instance.
For example, here is what finding the most books published in a single year looks like:
from redisearch import Client
from redisearch import reducers
from redisearch.aggregation import AggregateRequest
client = Client('books-idx')
request = AggregateRequest('*').group_by(
    '@published_year', reducers.count().alias("num_published")
).group_by(
    [], reducers.max("@num_published").alias("max_books_published_per_year")
)
result = client.aggregate(request)
The aggregation query just given is equivalent to the following FT.AGGREGATE command entered directly into the redis-cli:
FT.AGGREGATE books-idx *
    GROUPBY 1 @published_year
        REDUCE COUNT 0 AS num_published
    GROUPBY 0
        REDUCE MAX 1 @num_published AS max_books_published_per_year
Aggregation queries return an AggregateResult object that contains the rows returned for the query and a cursor if you're using the cursor API.
from redisearch import reducers
from redisearch.aggregation import AggregateRequest, Asc

request = AggregateRequest('*').group_by(
    ['@published_year'], reducers.avg('average_rating').alias('average_rating_for_year')
).sort_by(
    Asc('@average_rating_for_year')
).limit(
    0, 10
).filter('@published_year > 0')
...
In [53]: resp = c.aggregate(request)
In [54]: resp.rows
Out[54]:
[['published_year', '1914', 'average_rating_for_year', '0'],
['published_year', '2009', 'average_rating_for_year', '1.39166666667'],
['published_year', '2011', 'average_rating_for_year', '2.046'],
['published_year', '2010', 'average_rating_for_year', '3.125'],
['published_year', '2012', 'average_rating_for_year', '3.41'],
['published_year', '1967', 'average_rating_for_year', '3.603'],
['published_year', '1970', 'average_rating_for_year', '3.71875'],
['published_year', '1966', 'average_rating_for_year', '3.72666666667'],
['published_year', '1927', 'average_rating_for_year', '3.77']]
Notice from the example that we used an object from the reducers module. See the RediSearch documentation for more examples of reducer functions you can use when grouping results.
Reducer functions include an alias() method that gives the result of the reducer a specific name. If you don't supply a name, RediSearch will generate one.
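For example, the first reducer below is returned under the explicit name num_published, while the second is returned under whatever name RediSearch generates:

from redisearch import reducers

reducers.count().alias("num_published")   # result row contains "num_published"
reducers.count()                          # result row uses a generated name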
The group_by statement can take a single field name as a string, or multiple field names as a list of strings.
AggregateRequest('*').group_by('@published_year', reducers.count())
AggregateRequest('*').group_by(
    ['@published_year', '@average_rating'],
    reducers.count())
To run a reducer function on every result from an aggregation query, pass an empty list to group_by(), which is equivalent to passing the option GROUPBY 0 when writing an aggregation in the redis-cli.
AggregateRequest('*').group_by([], reducers.max("@num_published"))
NOTE: Aggregation queries require at least one group_by() method call.
Using an AggregateRequest instance, you can sort with the sort_by() method and limit with the limit() method.
For example, finding the average rating of books published each year, sorting by the average rating for the year, and returning only the first ten results:
from redisearch import Client, reducers
from redisearch.aggregation import AggregateRequest, Asc

c = Client('books-idx')

request = AggregateRequest('*').group_by(
    ['@published_year'], reducers.avg('average_rating').alias('average_rating_for_year')
).sort_by(
    Asc('@average_rating_for_year')
).limit(0, 10)

c.aggregate(request)
NOTE: The first argument to limit() is a zero-based offset, and the second argument is the number of results to return.
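For example, requesting the second page of ten results would look like this (reusing the grouping from the previous example):

request = AggregateRequest('*').group_by(
    ['@published_year'], reducers.avg('average_rating').alias('average_rating_for_year')
).limit(10, 10)  # skip the first 10 results, return the next 10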
Use filtering to reject results of an aggregation query after your reducer functions run. For example, calculating the average rating of books published each year and only returning years with an average rating higher than 3:
from redisearch import reducers
from redisearch.aggregation import AggregateRequest, Asc

req = AggregateRequest('*').group_by(
    ['@published_year'], reducers.avg('average_rating').alias('average_rating_for_year')
).sort_by(
    Asc('@average_rating_for_year')
).filter('@average_rating_for_year > 3')
To install the library from PyPI, run:
$ pip install redisearch
To set up a development environment, create a virtualenv, then install Poetry and the project dependencies:
virtualenv -v venv
pip install --user poetry
poetry install
Note: Due to an interaction between Poetry and Python 3.10, you may need to run the following if you receive a JSONError while installing packages.
poetry config experimental.new-installer false
Testing can easily be performed using Docker. Run the following:
make -C test/docker test PYTHON_VER=3
(Replace PYTHON_VER=3 with PYTHON_VER=2 to test with Python 2.7.)
Alternatively, use the following procedure:
First, run:
PYTHON_VER=3 ./test/test-setup.sh
This will set up a Python virtual environment in venv3 (or in venv2 if PYTHON_VER=2 is used).
Afterwards, run RediSearch in a container as a daemon:
docker run -d -p 6379:6379 redislabs/redisearch:2.0.0
Finally, invoke the virtual environment and run the tests:
. ./venv3/bin/activate
REDIS_PORT=6379 python test/test.py
REDIS_PORT=6379 python test/test_builder.py