Nekdis
What is it?
Nekdis is the temporary name for a proposal for redis-om that aims to improve user experience and performance by providing an ODM-like naming scheme similar to mongoose, the well-known ODM for MongoDB.
Future Plans
Right now the proposal includes almost every feature that redis-om already has (see: Missing Features) and introduces new ones, like References.
The next steps for the proposal include:
- Improve performance when parsing nested objects for hashes1.
- Improve auto fetch performance by including a Lua script that will be injected as a Redis function.
- Allow auto references to be updated.
- Improve reference checking.
- Add support for objects inside arrays.
- Make a proposal for node-redis to improve its performance.
Installation
Nekdis is available on npm via the command:

```sh
npm i nekdis
```
Getting Started
Connecting to the database
Nekdis already exports a global client, but you can also create your own instance with the Client class.

```ts
import { client } from "nekdis";

client.connect().then(() => {
    console.log("Connected to redis");
});
```
Creating an instance
```ts
import { Client } from "nekdis";

const client = new Client();

client.connect().then(() => {
    console.log("Connected to redis");
});
```
Creating a Schema
The client provides a helper to build a schema without any extra steps.
```ts
import { client } from "nekdis";

const catSchema = client.schema({
    name: { type: "string" }
});
```
Creating a Model
The client also provides a helper to create a model.
```ts
import { client } from "nekdis";

const catModel = client.model("Cat", catSchema);
```
Creating and Saving data
The model is what provides all the functions to manage your data on the database.
```ts
const aCat = catModel.createAndSave({
    name: "Nozomi"
});
```
The new RecordId
This proposal introduces a new way to create unique ids called RecordId.
RecordIds allow you to set prefixes and other properties on your ids that are shared across all of the records.
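The exact options for configuring a RecordId are not covered here, but here is a rough sketch, assuming the schema options object accepts a prefix alongside the suffix generator used in the example at the end of this document (both the prefix option name and the empty methods object are assumptions):

```ts
import { client } from "nekdis";

const catSchema = client.schema({
    name: { type: "string" }
}, {}, {
    prefix: "MyApp",                        // assumed option: a prefix shared by every record of the model
    suffix: () => Date.now().toString()     // suffix generator, as used in the example at the end of this document
});

const catModel = client.model("Cat", catSchema);
```

Every id generated for this model would then carry the shared prefix.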

Vector Similarity Search
There are three types of VSS queries, as described in the documentation.
Let's use the following schema and model for the next examples:
```ts
import { client } from "nekdis";

const testSchema = client.schema({
    age: "number",
    vec: "vector"
});

const testModel = client.model("Test", testSchema);
```
A note on the schema: passing the string "vector" will make the field default to the following options:
```ts
const vectorDefaults = {
    ALGORITHM: "FLAT",
    TYPE: "FLOAT32",
    DIM: 128,
    DISTANCE_METRIC: "L2"
};
```
Pure queries
```ts
testModel.search().where("vec").eq((vector) => vector
    .knn()
    .from([2, 5, 7])
    .return(8))
    .returnAll();
```
Hybrid queries
```ts
testModel.search().where("age").between(18, 30)
    .and("vec").eq((vector) => vector
        .knn()
        .from([2, 5, 7])
        .return(8))
    .returnAll();
```
Range queries
```ts
testModel.search().where("vec").eq((vector) => vector
    .range(5)
    .from([2, 5, 7]))
    .returnAll();
```
Custom Methods
In this proposal you can create your own custom methods that will be added to the Model. These methods are defined directly on the schema.
WARNING: Anonymous functions cannot be used when defining custom methods/functions
```ts
const albumSchema = client.schema({
    artist: { type: "string", required: true },
    name: { type: "text", required: true },
    year: "number"
}, {
    searchByName: async function (name: string) {
        return await this.search().where("name").matches(name).returnAll();
    }
});

const albumModel = client.model("Album", albumSchema);

const results = await albumModel.searchByName("DROP");
```
Modules
Nekdis allows you to add modules to the client. A module is a class that adds extra functionality to the library; you pass in the class and its constructor will receive the client as its first argument.
Keep in mind that this is most useful when you create your own client instance and export it, because that way you also get intellisense for the module.
```ts
import { type Client, client } from "nekdis";

class MyModule {
    constructor(client: Client) {
    }

    myFunction() {
    }
}

client.withModules({ name: "myMod", ctor: MyModule });

client.myMod.myFunction();
```
Schema Types
This proposal adds some new data types and removes the string[] & number[] types.
| Type | Description |
|---|---|
| string | A standard string that will be indexed as TAG |
| number | A standard float64 number that will be indexed as NUMERIC |
| bigint | A JavaScript BigInt that will be indexed as TAG |
| boolean | A standard boolean that will be indexed as TAG |
| text | A standard string that will be indexed as TEXT, which allows for full text search |
| date | Internally indexed as NUMERIC. It is saved as a Unix Epoch, but you can interact with it normally since it will be a Date when you access it |
| point | An object containing a latitude and longitude that will be indexed as GEO |
| array | Internally indexed as the type given to the elements property, which defaults to string |
| object | This type allows you to nest forever using the properties property in the schema, and what gets indexed are its properties. If none are given it will be neither indexed nor checked |
| reference | When using this type you will be given a ReferenceArray, which is a normal array with a reference method that you can pass another document or a record id to. References can be auto fetched, but auto fetched references cannot be changed |
| tuple | Tuples are presented as per-index type-safe arrays but are dealt with in a different way. They are indexed as static props so you can search on a specific element only. This also affects the query builder: instead of where(arrayName) it will be where(arrayName.idx.prop), but this has working intellisense just like all the other fields, so it shouldn't be an issue |
| vector | A vector field that is an array but indexed as VECTOR |
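To illustrate how these types look together in a schema definition, here is a sketch built from the types and properties described in this document (the field names are invented for the example):

```ts
import { client } from "nekdis";

const personSchema = client.schema({
    name: "string",                                   // indexed as TAG
    bio: "text",                                      // indexed as TEXT, full text searchable
    age: "number",                                    // indexed as NUMERIC
    createdAt: "date",                                // saved as a Unix Epoch, accessed as a Date
    home: "point",                                    // { latitude, longitude }, indexed as GEO
    nicknames: { type: "array", elements: "string" }, // indexed as its elements type
    address: {
        type: "object",
        properties: {
            city: "string",
            zip: "string"
        }
    },
    embedding: "vector"                               // defaults to FLAT / FLOAT32 / DIM 128 / L2
});
```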
Field Properties
This proposal includes the addition of two new shared properties and some unique ones.
Shared Properties
| Property | Description |
|---|---|
| type | The type of the field |
| optional | Defines whether the field is optional or not (this doesn't work if validation is disabled) |
| default | Choose a default value for the field, making it so that it will always exist even if it isn't required |
| index | Defines whether the field should be indexed or not (defaults to false) |
| sortable | Defines whether the field is sortable or not (note that this doesn't exist nor work on object fields & reference fields) |
Unique Properties
Vector properties won't be documented here; check the types instead.
| Property | Field type | Description |
|---|---|---|
| elements | array | Defines the type of the array |
| elements | tuple | Even though it has the same name, this field is required in tuples, and there is no way to define infinite-length tuples (just use normal arrays) |
| separator | array | Defines the separator that will be used for arrays on hash fields |
| properties | object | The properties the object contains; if this isn't defined the object won't be type checked nor indexed |
| schema | reference | A required property when using references; it allows intellisense to give the types on auto fetch and, later on, certain type checking to work as well |
| literal | string \| number \| bigint | Makes it so that the saved value has to be exactly one of the given literal values |
| caseSensitive | string | Defines whether the string is case sensitive or not |
| phonetic | text | Choose the phonetic matcher the field will use |
| weight | text | Declare the importance of the field |
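As an illustration of how the shared and unique properties combine in a schema, here is a sketch (the field names are invented, and the exact shapes of the weight and literal values are assumptions based on the descriptions above):

```ts
import { client } from "nekdis";

const productSchema = client.schema({
    sku: { type: "string", caseSensitive: true, index: true },
    title: { type: "text", weight: 2, sortable: true },           // weight value assumed to be a number
    price: { type: "number", optional: true, default: 0 },
    currency: { type: "string", literal: ["EUR", "USD"] },        // assumed to accept an array of allowed values
    labels: { type: "array", elements: "string", separator: "," } // separator only matters for hash fields
});
```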
Missing features
- Custom alias for a field2.
Todo
- in operator for number search
- Array of points
- Fully support array of objects
- Add $id alias3
Nekdis VS Redis-OM
In this part of the document I'm going to cover how this proposal compares to the current redis-om (0.4.2) and the major differences.
Client
In Nekdis the Client does not provide any methods to interact directly with the database; it is pretty much only used to store your models and handle the connection. However, you can access the node-redis client through client.raw.
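For example, dropping down to node-redis for commands Nekdis does not wrap itself (a small sketch; the key is arbitrary):

```ts
import { client } from "nekdis";

await client.connect();

// client.raw is the underlying node-redis client, so its commands are available directly
await client.raw.set("some:plain:key", "hello");
const value = await client.raw.get("some:plain:key");
```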
Schema
The schema in Nekdis is just where you define the shape of your data, while in redis-om it also takes care of creating indexes and some other internal bits.
With this comes the big question: "Well, why not just use a plain object then?" The simple answer is ease of use, but to explain it further: having the schema defined this way allows the library to internally check that nothing is missing and to parse it so you can use shorthand definitions like field: "string". This approach also allows you to define methods and options that will be passed down to the model later on, and it is one of the only ways to have references work properly without affecting performance.
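For instance, the shorthand and the explicit form below describe the same field:

```ts
import { client } from "nekdis";

// Shorthand form
const shortSchema = client.schema({
    name: "string"
});

// Equivalent explicit form that the shorthand is parsed into
const fullSchema = client.schema({
    name: { type: "string" }
});
```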
Model vs Repository
In redis-om you use a repository to interact with the db by using methods like fetch, save and search.
In Nekdis the model is not that different, but it allows you to add more functionality to it (see: Custom Methods) and overall gives more functionality out of the box.
Nekdis Document
In Nekdis you have what are called documents. This is just an abstraction over the data that allows better interaction with references and faster parsing.
At first this might look daunting compared to redis-om, which now uses plain objects, but I can assure you that there isn't much of a difference, and I will give some examples to demonstrate it.
Creating and saving
See, it's just as easy.
Nekdis:

```ts
await model.createAndSave({
    name: "DidaS"
});
```

Redis-OM:

```ts
await repository.save({
    name: "DidaS"
});
```
Creating and mutating
This is where things start to be a bit different. Even though you can use a plain object, that isn't recommended since it would just use more memory.
Nekdis:

```ts
const data = model.create({
    name: "DidaS"
});

data.year = 2023;

await model.save(data);
```

Nekdis with plain object:

```ts
const data = {
    name: "DidaS"
};

data.year = 2023;

await model.createAndSave(data);
```

Redis-OM:

```ts
const data = {
    name: "DidaS"
};

data.year = 2023;

await repository.save(data);
```
Search
Looking at search for the first time, it is pretty much the same. The main difference is that equals operations exist for every data type, so a lot of the time changing a field's data type in the schema won't break the query. The best part is that eq, equals and other operators like them support arrays (so they pretty much work like an in operator).
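For example, using the cat model from earlier (a small sketch based on the description above):

```ts
// Behaves like an in operator: matches cats named either "Nozomi" or "Sora"
const cats = await catModel.search()
    .where("name")
    .eq(["Nozomi", "Sora"])
    .returnAll();
```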
Nested objects
Currently in redis-om you need to define a path for each field to define your nested objects, whereas in Nekdis they just work like normal js objects!
There are several advantages to this, two of the main ones being faster serialization/deserialization and simpler usage. Here is an example comparing both:
Nekdis:

```ts
client.schema({
    field: {
        type: "object",
        properties: {
            aNumberInsideOfIt: "number",
            nesting: {
                type: "object",
                properties: {
                    doubleNested: "boolean"
                }
            }
        }
    }
});
```

Redis-OM:

```ts
new Schema("OM", {
    aNumberInsideOfIt: {
        type: "number",
        path: "$.field.aNumberInsideOfIt"
    },
    doubleNested: {
        type: "boolean",
        path: "$.field.nesting.doubleNested"
    }
});
```
A Simple example
This is a simple example program that generates 30 random users with random ages and fetches the ones matching a certain age range, just to show the differences between the libraries.
Nekdis:

```ts
import { client } from "nekdis";

await client.connect();

const userSchema = client.schema({
    age: "number"
}, {
    findBetweenAge: async function (min: number, max: number) {
        return await this.search().where("age").between(min, max).returnAll();
    }
}, { suffix: () => Date.now().toString() });

const userModel = client.model("User", userSchema);

await userModel.createIndex();

for (let i = 0; i < 30; i++) {
    await userModel.createAndSave({
        age: between(18, 90)
    });
}

const users = await userModel.findBetweenAge(30, 50);

console.log(users);

await client.disconnect();

function between(min: number, max: number) {
    return Math.round(Math.random() * (max - min + 1)) + min;
}
```

Redis-OM:

```ts
import { randomUUID } from "node:crypto";
import { createClient } from "redis";
import { Schema, Repository, Entity, EntityId } from "redis-om";

const client = createClient();

await client.connect();

const userSchema = new Schema("User", {
    age: { type: "number" }
});

interface UserEntity extends Entity {
    age: number
}

const userRepository = new Repository(userSchema, client);

await userRepository.createIndex();

for (let i = 0; i < 30; i++) {
    await userRepository.save({
        [EntityId]: `${Date.now()}:${randomUUID()}`,
        age: between(18, 90)
    });
}

const users = await findBetweenAge(userRepository, 30, 50);

console.log(users);

await client.disconnect();

async function findBetweenAge(repository: Repository, min: number, max: number): Promise<Array<UserEntity>> {
    return <Array<UserEntity>>await repository.search().where("age").between(min, max).returnAll();
}

function between(min: number, max: number) {
    return Math.round(Math.random() * (max - min + 1)) + min;
}
```
Open Issues this proposal fixes
Benchmarks
A lot of benchmarks were made, and they can be found here.