nsql-cache
Advanced Cache Layer for NoSQL databases

Installation | API | Support
nsql-cache is an advanced cache layer for NoSQL database clients. It is vendor agnostic and currently ships with one database adapter: Google Datastore (nsql-cache-datastore).
Highlights
- Multiple cache stores with different TTLs, thanks to node-cache-manager.
- An LRU memory cache out of the box to speed up your application right away.
- An advanced cache (when using node_redis) that automatically saves queries in Redis Sets, grouped by entity Kind. Queries can then have an infinite TTL (time to live), and their cache is invalidated only when an entity of the same Kind is added, updated or deleted.
Please don’t forget to star this repo if you found it useful :)
Installation
To create an nsql-cache instance, we need to provide a database adapter. In the examples here we will use the Google Datastore adapter.
npm install nsql-cache nsql-cache-datastore --save
yarn add nsql-cache nsql-cache-datastore
Create a cache instance
const Datastore = require('@google-cloud/datastore');
const NsqlCache = require('nsql-cache');
const dsAdapter = require('nsql-cache-datastore');
const datastore = new Datastore();
const db = dsAdapter(datastore);
const cache = new NsqlCache({ db });
Great! You now have an LRU memory cache with the following configuration:
- Maximum number of objects in cache: 100
- TTL (time to live) for entities (Key fetch): 10 minutes
- TTL for queries: 5 seconds
Configuration
To change the default TTL you can pass a configuration object when creating the cache instance.
const cache = new NsqlCache({
    db,
    config: {
        ttl: {
            keys: 60 * 10,
            queries: 5,
        }
    }
});
For the complete configuration options, please refer to the API documentation below.
Wrap database client
By default, if the database adapter supports it, nsql-cache will wrap the database client in order to fully manage the cache for you.
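For example, with the wrapped Datastore client, a plain call to the client goes through the cache transparently. A minimal sketch, assuming a './db' module that exports the wrapped client (as in the API examples further down):

const { datastore } = require('./db'); // module exporting the wrapped client

const key = datastore.key(['Company', 'Google']);

// First call: cache miss, the entity is fetched from the Datastore and primed in the cache.
// Subsequent calls within the keys TTL are served from the cache.
datastore.get(key).then(([company]) => console.log(company));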
If you don't want the database client to be wrapped, disable it in the configuration. You are then responsible for managing the cache yourself (a minimal sketch follows the snippet below); look at the examples in the nsql-cache-datastore repository for more complete scenarios.
const cache = new NsqlCache({
    db,
    config: {
        ...
        wrapClient: false,
    }
});
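Managing the cache yourself essentially means going through the cache.keys and cache.queries helpers documented in the API section below. A minimal sketch, reusing the Datastore client from above:

const { datastore, cache } = require('./db');

const key = datastore.key(['Company', 'Google']);

// Check the cache first; on a miss, fetch from the database and prime the cache.
cache.keys.read(key).then(entity => console.log(entity));

const query = datastore.createQuery('Post').filter('category', 'tech');

// Same read-through pattern for queries.
cache.queries.read(query).then(response => console.log(response[0]));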
Core concepts
nsql-cache is based on the core concepts of NoSQL data aggregation. As there are no JOIN operations spanning multiple tables, the only two ways to fetch entities are the following (illustrated in the sketch after this list):
- by Key(s): the fastest way to retrieve one or several entities from the database
- by Query: always scoped to a single entity Kind, e.g.
SELECT * FROM Posts WHERE category = 'tech'
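With the Google Datastore client used in this README, those two access patterns look like this (a sketch for illustration):

const { datastore } = require('./db');

// 1. Fetch by Key: a direct lookup, the fastest path.
const key = datastore.key(['Post', 123]);
datastore.get(key).then(([post]) => console.log(post));

// 2. Fetch by Query: always scoped to a single entity Kind ("Post" here).
const query = datastore.createQuery('Post').filter('category', 'tech');
query.run().then(([posts]) => console.log(posts));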
Queries
As you might have noticed in the default configuration above, queries have a very short TTL (5 seconds). This is because as soon as we create, update or delete an entity, any query we have cached might be out of sync.
Depending on your use case, 5 seconds may or may not be acceptable. Remember that you can always disable the cache or lower the TTL on specific queries (see the sketch below). You might also decide that you never want queries to be cached; in that case, set the global TTL value for queries to -1.
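For instance, both options could look like this (a sketch, reusing the db adapter and a query built as in the other examples):

// Never cache queries: set the global TTL for queries to -1
const cache = new NsqlCache({
    db,
    config: {
        ttl: {
            queries: -1,
        },
    },
});

// ...or keep the query cache but lower the TTL on a specific read
cache.queries.read(query, { ttl: 2 }).then(response => console.log(response[0]));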
But there is a better way: providing a Redis client.
Multi cache stores
nsql-cache uses the node-cache-manager library to handle the cache, which means that you can have multiple cache stores, each with its own TTL. The most interesting one for us is cache-manager-redis-store, as it is a Redis client that supports mget() and mset(), which is what we need for our batch operations (get, save, delete multiple keys).
First, add the dependency to your package.json
npm install cache-manager-redis-store --save
yarn add cache-manager-redis-store
Then provide the cache store to the nsql-cache constructor.
...
const redisStore = require('cache-manager-redis-store');

const cache = new NsqlCache({
    db,
    stores: [
        {
            store: 'memory',
            max: 100,
        },
        {
            store: redisStore,
            host: 'localhost',
            port: 6379,
            auth_pass: 'xxxx'
        }
    ]
});
We now have two cache stores with different TTL values in each one.
- memory store: ttl keys = 5 minutes, ttl queries = 5 seconds
- redis store: ttl keys = 1 day, ttl queries = infinite (0)
If you only want the Redis cache, remove the memory store from the array.
Infinite cache for queries? Yes! nsql-cache keeps a reference to each query, per Entity Kind, in a Redis Set, so it can invalidate their cache when an entity of the same Kind is added, updated or deleted.
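Concretely, with the wrapped client and the Redis store in place, the flow looks roughly like this (a sketch; if the client is not wrapped, call cache.queries.clearQueriesByKind() yourself, as shown in the API section):

const { datastore, cache } = require('./db');

const query = datastore.createQuery('Posts').filter('category', 'tech');

// The query response is cached in Redis (infinite TTL by default) and a reference
// to it is added to a Redis Set for the "Posts" Kind.
cache.queries.read(query).then(response => console.log(response[0]));

// Saving an entity of the same Kind invalidates every cached "Posts" query,
// so the next read runs the query against the database again.
const key = datastore.key(['Posts']);
datastore.save({ key, data: { title: 'My new post' } });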
You can of course change the default TTL for each store:
...
const cache = new NsqlCache({
    db,
    stores: [
        { store: 'memory' },
        { store: redisStore }
    ],
    config: {
        ttl: {
            memory: {
                keys: 60 * 60,
                queries: 30
            },
            redis: {
                keys: 60 * 60 * 48,
                queries: 60 * 60 * 24
            },
        }
    }
});
API
NsqlCache Instantiation
new NsqlCache(options)
Note on stores: each store is an object that will be passed to the cacheManager.caching() method. Read the docs to learn more about node-cache-manager.
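For reference, the memory store definition from the example above corresponds roughly to this direct node-cache-manager call (done internally by nsql-cache, shown here only for illustration):

const cacheManager = require('cache-manager');

// Roughly what nsql-cache sets up for { store: 'memory', max: 100 } in the stores array
const memoryCache = cacheManager.caching({ store: 'memory', max: 100 });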
Important: since version 2.7.0, cache-manager supports mset(), mget() and del() for batch operations on multiple keys. The store(s) you provide here must support this feature. At the time of this writing, only the "memory" store and "cache-manager-redis-store" support it.
If you provide a store that does not support mset/mget, you can still use nsql-cache, but you won't be able to set or retrieve multiple keys/queries in batch.
The config object has the following properties (showing default values):
const config = {
    ttl: {
        keys: 60 * 10, // 10 minutes
        queries: 5, // 5 seconds
        memory: {
            keys: 60 * 10,
            queries: 5,
        },
        redis: {
            keys: 60 * 60 * 24, // 1 day
            queries: 0, // infinite
        },
    },
    cachePrefix: {
        keys: 'gck:',
        queries: 'gcq:',
    },
    wrapClient: true,
    hashCacheKeys: true,
    global: true,
};
cache.keys
read(key|Array<key> [, options, fetchHandler])
Helper that will:
- check the cache
- if the entity(ies) are not found in the cache, fetch them from the database
- prime the cache with the entity(ies) data
Arguments
- key: an entity Key or an Array of entity Keys. If it is an array of keys, only the keys that are not found in the cache will be passed to the fetchHandler.
- options: an optional object of options.
{
    ttl: 900,
}
or, with multiple cache stores:
{
    ttl: { memory: 300, redis: 3600 },
}
- fetchHandler: an optional function handler to fetch the keys. If it is not provided, it defaults to the database adapter's getEntity(keys) method.
const { datastore, cache } = require('./db');

const key = datastore.key(['Company', 'Google']);

// Read through the cache (fetches from the Datastore on a cache miss)
cache.keys.read(key)
    .then(entity => console.log(entity));

// Custom fetch handler: fetch the company and attach its latest posts
const fetchHandler = (key) => (
    datastore.get(key)
        .then(([company]) => {
            const query = datastore.createQuery('Posts')
                .filter('companyId', key.id)
                .limit(10);

            return cache.queries.read(query)
                .then((response) => {
                    company.posts = response[0];
                    return company;
                });
        })
);

cache.keys.read(key, fetchHandler)
    .then((entity) => {
        console.log(entity);
    });

// Passing options (here a custom TTL) together with a fetch handler
cache.keys.read(key, { ttl: 900 }, fetchHandler)
    .then((entity) => {
        console.log(entity);
    });
get(key)
Retrieve an entity from the cache by passing a database Key object.
const key = datastore.key(['Company', 'Google']);
cache.keys.get(key).then(entity => {
    console.log(entity);
});
mget(key [, key2, key3, ...])
Retrieve multiple entities from the cache.
const key1 = datastore.key(['Company', 'Google']);
const key2 = datastore.key(['Company', 'Twitter']);
cache.keys.mget(key1, key2).then(entities => {
    console.log(entities[0]);
    console.log(entities[1]);
});
set(key, entity [, options])
Add an entity in the cache.
- options: an optional object of options.
{
    ttl: 900,
}
or, with multiple cache stores:
{
    ttl: { memory: 300, redis: 3600 },
}
const key = datastore.key(['Company', 'Google']);

datastore.get(key).then(response => {
    cache.keys.set(key, response[0]).then(() => {
        // the entity is now in the cache
    });
});
mset(key, entity [, key(n), entity(n), options])
Add multiple entities in the cache.
- options: an optional object of options.
{
    ttl: 900,
}
or, with multiple cache stores:
{
    ttl: { memory: 300, redis: 3600 },
}
const key1 = datastore.key(['Company', 'Google']);
const key2 = datastore.key(['Company', 'Twitter']);

datastore.get([key1, key2]).then(response => {
    const [entities] = response;
    cache.keys.mset(key1, entities[0], key2, entities[1], { ttl: 240 }).then(() => ...);
});
del(key [, key2, key3, ...])
Delete one or multiple keys from the cache
const key1 = datastore.key(['Company', 'Google']);
const key2 = datastore.key(['Company', 'Twitter']);
cache.keys.del(key1).then(() => { ... });
cache.keys.del(key1, key2).then(() => { ... });
cache.queries
read(query [, options, fetchHandler])
Helper that will:
- check the cache
- if the query is not found in the cache, run the query on the database.
- prime the cache with the response of the Query.
Arguments
- options: an optional object of options.
{
    ttl: 900,
}
or, with multiple cache stores:
{
    ttl: { memory: 300, redis: 3600 },
}
- fetchHandler: an optional function handler to run the query. If it is not provided, it defaults to the database adapter's runQuery(query) method.
const { datastore, cache } = require('./db');

const query = datastore
    .createQuery('Post')
    .filter('category', 'tech')
    .order('updatedOn')
    .limit(10);

// Read through the cache (runs the query on a cache miss)
cache.queries.read(query)
    .then(response => console.log(response[0]));

// Custom fetch handler
const fetchHandler = (q) => (
    q.run()
        .then(([entities, meta]) => [entities, meta])
);

cache.queries.read(query, fetchHandler)
    .then((response) => {
        console.log(response[0]);
        console.log(response[1].moreResults);
    });
get(query)
Retrieve a query from the cache by passing a Query object.
const query = datastore.createQuery('Post').filter('category', 'tech');
cache.queries.get(query).then(response => {
    console.log(response[0]);
});
mget(query [, query2, query3, ...])
Retrieve multiple queries from the cache.
const query1 = datastore.createQuery('Post').filter('category', 'tech');
const query2 = datastore.createQuery('User').filter('score', '>', 1000);
cache.queries.mget(query1, query2).then(response => {
    console.log(response[0]);
    console.log(response[1]);
});
set(query, data [, options])
Add a query in the cache
- options: an optional object of options.
{
    ttl: 900,
}
or, with multiple cache stores:
{
    ttl: { memory: 300, redis: 3600 },
}
const query = datastore.createQuery('Post').filter('category', 'tech');
query.run().then(response => {
    cache.queries.set(query, response).then(response => {
        console.log(response[0]);
    });
});
mset(query, data [, query(n), data(n), options])
Add multiple queries in the cache.
- options: an optional object of options.
{
    ttl: 900,
}
or, with multiple cache stores:
{
    ttl: { memory: 300, redis: 3600 },
}
const query1 = datastore.createQuery('Post').filter('category', 'tech');
const query2 = datastore.createQuery('User').filter('score', '>', 1000);
Promise.all([query1.run(), query2.run()])
    .then(result => {
        cache.queries.mset(query1, result[0], query2, result[1], { ttl: 900 })
            .then(() => ...);
    });
kset(key, value, entityKind|Array<EntityKind> [, options])
Important: this method is only available if you have provided a Redis cache store during initialization.
- options: an optional object of options.
{
    ttl: 900,
}
If you have complex data resulting from several queries and targeting one or multiple Entity Kinds, you can cache it and link the Entity Kind(s) to it. Let's see it in an example:
const { datastore, cache } = require('./db');

const fetchHomeData = () => {
    return cache.get('website:home').then(data => {
        if (data) {
            // Data found in the cache
            return data;
        }

        const queryPosts = datastore
            .createQuery('Posts')
            .filter('category', 'tech')
            .limit(10)
            .order('publishedOn', { descending: true });

        const queryTopStories = datastore
            .createQuery('Posts')
            .order('score', { descending: true })
            .limit(3);

        const queryProducts = datastore.createQuery('Products').filter('featured', true);

        return Promise.all([queryPosts.run(), queryTopStories.run(), queryProducts.run()]).then(result => {
            const homeData = {
                posts: result[0],
                topStories: result[1],
                products: result[2],
            };

            // Cache the data and link it to the "Posts" and "Products" Entity Kinds
            return cache.queries.kset('website:home', homeData, ['Posts', 'Products']);
        });
    });
};
clearQueriesByKind(entityKind|Array<EntityKind>)
Delete all the queries linked to one or several Entity Kinds.
const key = datastore.key(['Posts']);
const data = { title: 'My new post', text: 'Body text of the post' };

datastore.save({ key, data })
    .then(() => {
        // Invalidate all the cached queries for the "Posts" Kind
        cache.queries.clearQueriesByKind(['Posts'])
            .then(() => {
                ...
            });
    });
del(query [, query2, query3, ...])
Delete one or multiple queries from the cache
const query1 = datastore.createQuery('Post').filter('category', 'tech');
const query2 = datastore.createQuery('User').filter('score', '>', 1000);
cache.queries.del(query1).then(() => { ... });
cache.queries.del(query1, query2).then(() => { ... });
"cache-manager" methods bindings (get, mget, set, mset, del, reset)
nsql-cache has bindings to the underlying "cache-manager" methods get, mget, set, mset, del and reset. This allows you to cache any other data. Refer to the cache-manager documentation for the details.
const { cache } = require('./db');
cache.set('my-key', { data: 123 }).then(() => ...);
cache.get('my-key').then((data) => console.log(data));
cache.mset('my-key1', true, 'my-key2', 123, { ttl: 60 }).then(() => ...);
cache.mget('my-key1', 'my-key2').then((data) => {
    const [data1, data2] = data;
});
cache.del(['my-key1', 'my-key2']).then(() => ...);
cache.reset().then(() => ...);
Development setup
Install the dependencies and run the tests. nsql-cache lints the code with ESLint and formats it with Prettier, so make sure you have both plugins installed in your IDE.
npm install
npm test
npm run coverage
npm run prettier
Release History
Meta
Sébastien Loix – @sebloix
Distributed under the MIT license. See LICENSE for more information.
https://github.com/sebelga
Contributing
- Fork it (https://github.com/sebelga/nsql-cache/fork)
- Create your feature branch (git checkout -b feature/fooBar)
- Commit your changes (git commit -am 'Add some fooBar')
- Push to the branch (git push origin feature/fooBar)
- Rebase your feature branch and squash (git rebase -i master)
- Create a new Pull Request