@nrk/nodecache-as-promised
Comparing version 1.0.3 to 1.0.4
@@ -5,4 +5,8 @@ {
"license": "MIT",
"version": "1.0.3",
"description": "NodeJs in-memory cache. Optional distributed capabilites using Redis pub/sub + persistence",
"bugs": {
  "url": "https://github.com/nrkno/nodecache-as-promised/issues"
},
"homepage": "https://github.com/nrkno/nodecache-as-promised#readme",
"version": "1.0.4",
"description": "NodeJs in-memory cache with Promise support. Extendable with middlewares. Middlewares provided: Distributed invalidation and persistence of cache misses",
"main": "lib/index.js",
@@ -9,0 +13,0 @@ "scripts": {
@@ -21,3 +21,3 @@ # @nrk/nodecache-as-promised
## Motivation
Sometimes Node.js needs to do some heavy lifting, performing CPU- or network-intensive tasks while still responding quickly to incoming requests. For repetitive tasks like server-side rendering of markup or parsing big JSON responses, caching can give the application a great performance boost. Since many requests may hit the server concurrently, you do not want more than *one* worker to run for a given resource at the same time. In addition, serving stale content when a backend resource is down may save your day! The intention of `nodecache-as-promised` is to give you a fairly simple, yet powerful application cache, with fine-grained control over caching behaviour.
Sometimes Node.js needs to do some heavy lifting, performing CPU- or network-intensive tasks while still responding quickly to incoming requests. For repetitive tasks like server-side rendering of markup or parsing big JSON responses, caching can give the application a great performance boost. Since many requests may hit the server concurrently, you do not want more than *one* worker to run for a given resource at the same time. In addition, serving stale content when a backend resource is down may save your day! The intention of `nodecache-as-promised` is to give you a fairly simple interface, yet a powerful application cache, with fine-grained control over caching behaviour.
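A minimal usage sketch of that pattern (the `fetchSomeData` worker and the option values are illustrative assumptions; the wrapper object with `value` and `cache` fields follows the tests further down):

```js
import inMemoryCache from '@nrk/nodecache-as-promised'

// constrain the LRU-cache; 1000 is an arbitrary example value
const cache = inMemoryCache({maxLength: 1000})

cache
  .get('myKey', {
    ttl: 60000,                    // time-to-live in ms
    worker: () => fetchSomeData()  // hypothetical promise-returning producer
  })
  .then(({value, cache: hitOrMiss}) => {
    // concurrent .get calls for 'myKey' would share this one worker run
    console.log(hitOrMiss, value)
  })
```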
@@ -27,6 +27,6 @@ `nodecache-as-promised` is inspired by how [Varnish](https://varnish-cache.org/) works. It is not intended to replace Varnish (but works great in combination). In general, Varnish works great as an edge/burst/failover cache, in addition to reverse proxying and load balancing. Several other cache solutions exist on NPM, but they're often too basic or too attached to a combination of prerequisites that do not fit all the needs of an application cache.
### Features
- __In-memory cache__ is used as primary storage since it will always be faster than parsing and fetching data over the network. An [LRU-cache](https://www.npmjs.com/package/lru-cache) is enabled to constrain the amount of memory used.
- __In-memory cache__ is used as primary storage since it will always be faster than parsing and fetching data from disk or over the network. An [LRU-cache](https://www.npmjs.com/package/lru-cache) is enabled to constrain the amount of memory used.
- __Caches are filled using worker promises__ since cached objects often depend on async operations. [RxJS](https://www.npmjs.com/package/rxjs) is used to queue concurrent requests for the same key, thus ensuring that only __one__ worker runs when cached content is missing or stale.
- __Caching of custom class instances, functions and native objects__ such as Date, RegExp and redux stores is supported through in-memory caching. Objects that cannot be serialized with JSON.stringify are filtered out of persistent caches, though.
- __Grace mode__ is used if a worker promise fails (e.g. caused by failing backends), i.e. stale cache is returned instead.
- __Caching of custom class instances, functions and native objects__ such as Date, RegExp and Redux stores is supported through in-memory caching. Objects that cannot be serialized with JSON.stringify are filtered out of persistent caches, though.
- __Grace mode__ is used if a worker fails (e.g. caused by failing backends), i.e. stale cache is returned instead.
- __Avoidance of spamming backend resources__ using a configurable retry-wait parameter, serving either a stale object or a rejection.
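A sketch of the deduplication and grace behaviour described above, reusing the `cache` instance from the earlier example (`fetchJson` is a hypothetical helper):

```js
const worker = () => fetchJson('https://example.com/resource') // hypothetical fetch helper

// two concurrent gets for the same key: the worker runs only once
Promise.all([
  cache.get('resource', {worker}),
  cache.get('resource', {worker})
]).then(([a, b]) => {
  console.log(a.value === b.value) // true; both callers got the same result
})

// if the entry later turns stale and the worker rejects, grace mode kicks in
// and the stale value is served instead of the rejection
```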
@@ -140,4 +140,11 @@ - __Middleware support__ so you may create your own custom extensions. Provided middlewares:
#### .has(key)
Check if a key is in the cache, without updating the recent-ness or deleting it for being stale.
#### .del(key)
Deletes a key out of the cache.
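For example (a short sketch; `cache` is an instance created with `inMemoryCache`, as in the tests below):

```js
cache.set('myKey', {hello: 'world'})
cache.has('myKey') // true
cache.del('myKey')
cache.has('myKey') // false
```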
#### .expire(keys)
Mark keys as stale
Mark keys as stale (i.e. set TTL = 0)
```js
cache.expire(['myKey*', 'anotherKey'])
```
@@ -149,2 +156,14 @@
#### .keys()
Get all keys stored in the cache, as an array of strings
#### .values()
Get all values stored in the cache, as an array
#### .entries()
Get all entries stored in the cache, as a Map of keys and values
#### .clear()
Clear the cache entirely, throwing away all values.
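A short sketch based on the tests in this release (note that stored values are wrapper objects; read the cached data through `.value`):

```js
cache.set('a', {hello: 'world'})
cache.set('b', {hello: 'again'})

cache.keys()                            // ['b', 'a'], most recently set first
cache.values().map(({value}) => value)  // unwrap the cached data
Array.from(cache.entries())             // [key, wrapper] pairs from the returned Map

cache.clear()                           // throws away all values
```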
#### .addDisposer(callback)
@@ -225,3 +244,3 @@ Add a callback to be called when an item is evicted by the LRU-cache. Used to do cleanup
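For instance (a sketch; the callback arguments follow lru-cache's `dispose(key, value)` convention, which is an assumption here):

```js
const dispose = (key, value) => {
  // e.g. tear down timers or sockets held by the evicted value
  console.log(`disposing ${key}`)
}

cache.addDisposer(dispose)
// ...later, when cleanup is no longer needed:
cache.removeDisposer(dispose)
```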
### Distributed capabilities
Distributed expire and persisting of cache misses to Redis are provided as middlewares, i.e. wrapping the in-memory cache with a factory that intercepts function calls. It should therefore be easy to write your own middlewares using pub/sub from RabbitMQ or ZeroMQ, persisting to a NAS, reporting hit/miss ratios to external measurement systems, and more.
Distributed expire and persisting of cache misses to Redis are provided as middlewares, i.e. extending the in-memory cache with interceptors. Writing your own middlewares, using pub/sub from RabbitMQ or ZeroMQ, persisting to a NAS, counting hit/miss ratios and so on, should be easy.
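A wiring sketch for the provided middlewares (the named exports, the `distCache` namespace argument and the `ioredis` client are assumptions for illustration; `cache.use(persistentCache(...))` follows the tests below):

```js
import inMemoryCache, {distCache, persistentCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'

// a factory, so each middleware can create its own connections
const redisFactory = () => new Redis('redis://localhost:6379')

const cache = inMemoryCache({maxLength: 1000})
// distributed expire via Redis pub/sub (namespace assumed)
cache.use(distCache(redisFactory, 'myNamespace'))
// persist cache misses to Redis so they survive restarts
cache.use(persistentCache(redisFactory, {bootload: true}))
```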
@@ -228,0 +247,0 @@ #### Distributed expire
@@ -20,3 +20,4 @@ /* eslint max-nested-callbacks: 0 */
expect(cacheInstance).to.be.a(Object)
expect(cacheInstance.cache.itemCount).to.equal(0)
const info = cacheInstance.debug()
expect(info.itemCount).to.equal(0)
})
@@ -34,5 +35,6 @@
expect(cacheInstance).to.be.a(Object)
expect(cacheInstance.cache.itemCount).to.equal(1)
expect(cacheInstance.cache.get('hei').value).to.equal('verden')
expect(cacheInstance.cache.get('hei').cache).to.equal('hit')
const info = cacheInstance.debug()
expect(info.itemCount).to.equal(1)
expect(cacheInstance.get('hei').value).to.equal('verden')
expect(cacheInstance.get('hei').cache).to.equal('hit')
})
@@ -69,2 +71,33 @@ })
describe('-> has/del/clear', () => {
  let cacheInstance
  beforeEach(() => {
    cacheInstance = inMemoryCache({initial: preCached})
  })
  it('should return true if key exists in cache', () => {
    cacheInstance.set('key', 'value')
    expect(cacheInstance.has('key')).to.equal(true)
  })
  it('should return false if key is not in cache', () => {
    expect(cacheInstance.has('key')).to.equal(false)
  })
  it('should return false if key was deleted from cache', () => {
    cacheInstance.set('key', 'value')
    cacheInstance.del('key')
    expect(cacheInstance.has('key')).to.equal(false)
  })
  it('should return false if keys were removed by clearing the cache', () => {
    cacheInstance.set('key1', 'value')
    cacheInstance.set('key2', 'value')
    cacheInstance.clear()
    expect(cacheInstance.has('key1')).to.equal(false)
    expect(cacheInstance.has('key2')).to.equal(false)
  })
})
describe('-> cold/stale cache', () => {
@@ -77,4 +110,3 @@ let cacheInstance
cacheInstance = inMemoryCache({initial: preCached})
const staleObj = {...cacheInstance.cache.get(dummyKey), TTL: -1000}
cacheInstance.cache.set(dummyKey, staleObj)
cacheInstance.set(dummyKey, cacheInstance.get(dummyKey).value, -1000)
now = Date.now()
@@ -125,4 +157,3 @@ spy = sinon.spy(() => new Promise((resolve) => {
cacheInstance = inMemoryCache({initial: preCached})
const staleObj = {...cacheInstance.cache.get(dummyKey), TTL: -1000}
cacheInstance.cache.set(dummyKey, staleObj)
cacheInstance.set(dummyKey, cacheInstance.get(dummyKey).value, -1000)
now = Date.now()
@@ -152,4 +183,3 @@ spy = sinon.spy(() => new Promise((resolve) => {
cacheInstance = inMemoryCache({initial: preCached, log: dummyLog})
const staleObj = {...cacheInstance.cache.get(dummyKey), TTL: -1000}
cacheInstance.cache.set(dummyKey, staleObj)
cacheInstance.set(dummyKey, cacheInstance.get(dummyKey).value, -1000)
})
@@ -216,4 +246,3 @@
cacheInstance = inMemoryCache({initial: preCached, log: dummyLog})
const staleObj = {...cacheInstance.cache.get(dummyKey), TTL: -1000}
cacheInstance.cache.set(dummyKey, staleObj)
cacheInstance.set(dummyKey, cacheInstance.get(dummyKey).value, -1000)
})
@@ -333,2 +362,36 @@
describe('-> keys/values/entries', () => {
  let cacheInstance
  beforeEach(() => {
    cacheInstance = inMemoryCache({initial: {
      'house/1': {hei: 'verden1'},
      'house/2': {hei: 'verden2'},
      'guest/2': {hei: 'verden3'}
    }})
  })
  it('should return keys', () => {
    expect(cacheInstance.keys()).to.eql(['house/1', 'house/2', 'guest/2'].reverse())
  })
  it('should return values', () => {
    expect(cacheInstance
      .values()
      .map(({value}) => value))
      .to.eql([{hei: 'verden3'}, {hei: 'verden2'}, {hei: 'verden1'}])
  })
  it('should return entries', () => {
    expect(Array.from(cacheInstance.entries())
      .map(([key, {value}]) => {
        return {[key]: value}
      })).to.eql([
        {'guest/2': {hei: 'verden3'}},
        {'house/2': {hei: 'verden2'}},
        {'house/1': {hei: 'verden1'}}
      ])
  })
})
describe('-> expire', () => {
@@ -347,5 +410,5 @@ let cacheInstance
cacheInstance.expire(['house/*'])
expect(cacheInstance.cache.get('house/1').TTL).to.equal(0)
expect(cacheInstance.cache.get('house/2').TTL).to.equal(0)
expect(cacheInstance.cache.get('guest/2').TTL).not.to.equal(0)
expect(cacheInstance.get('house/1').TTL).to.equal(0)
expect(cacheInstance.get('house/2').TTL).to.equal(0)
expect(cacheInstance.get('guest/2').TTL).not.to.equal(0)
})
@@ -355,5 +418,5 @@
cacheInstance.expire(['house/*', 'guest/2'])
expect(cacheInstance.cache.get('house/1').TTL).to.equal(0)
expect(cacheInstance.cache.get('house/2').TTL).to.equal(0)
expect(cacheInstance.cache.get('guest/2').TTL).to.equal(0)
expect(cacheInstance.get('house/1').TTL).to.equal(0)
expect(cacheInstance.get('house/2').TTL).to.equal(0)
expect(cacheInstance.get('guest/2').TTL).to.equal(0)
})
@@ -372,4 +435,5 @@ })
})
expect(cacheInstance.cache.itemCount).to.equal(2)
expect(cacheInstance.cache.keys()).to.eql(['guest/3', 'house/2'])
const info = cacheInstance.debug()
expect(info.itemCount).to.equal(2)
expect(cacheInstance.keys()).to.eql(['guest/3', 'house/2'])
})
@@ -396,6 +460,27 @@
expect(spy.callCount).to.equal(1)
expect(cacheInstance.cache.itemCount).to.equal(2)
expect(cacheInstance.cache.keys()).to.eql(['guest/4', 'guest/3'])
const info = cacheInstance.debug()
expect(info.itemCount).to.equal(2)
expect(cacheInstance.keys()).to.eql(['guest/4', 'guest/3'])
})
it('should call dispose on del operations', () => {
  const cacheInstance = inMemoryCache({maxLength: 2})
  const spy = sinon.spy()
  cacheInstance.addDisposer(spy)
  cacheInstance.set('house/1', {hei: 'verden'})
  cacheInstance.del('house/1')
  expect(spy.called).to.equal(true)
  cacheInstance.removeDisposer(spy)
})
it('should call dispose on clear operations', () => {
  const cacheInstance = inMemoryCache({maxLength: 2})
  const spy = sinon.spy()
  cacheInstance.addDisposer(spy)
  cacheInstance.set('house/1', {hei: 'verden'})
  cacheInstance.clear()
  expect(spy.called).to.equal(true)
  cacheInstance.removeDisposer(spy)
})
})
})
@@ -49,3 +49,3 @@ import distCache from '../'
cache.expire(['hello'])
expect(cache.cache.get('hello').TTL).to.equal(0)
expect(cache.get('hello').TTL).to.equal(0)
return cache.get('hello', {worker: spy}).then((obj) => {
@@ -52,0 +52,0 @@ expect(obj.value).to.equal('world2')
@@ -76,3 +76,16 @@ /** | ||
/** | ||
* @description add a callback to lruCache#dispose | ||
* @access public | ||
* @param {function} callback - a function to be called when a cache key is evicted | ||
* @returns {undefined} | ||
**/ | ||
const addDisposer = (cb) => disposers.push(cb) | ||
/** | ||
* @description remove a callback from lruCache#dispose | ||
* @access public | ||
* @param {function} callback - a function to be called when a cache key is evicted | ||
* @returns {undefined} | ||
**/ | ||
const removeDisposer = (cb) => (disposers = disposers.filter((disposer) => disposer && disposer !== cb)) | ||
@@ -93,2 +106,31 @@ | ||
/**
 * @description check if key exists in cache
 * @access public
 * @param {String} key - key in cache to lookup.
 * @returns {Boolean} - true|false key exists in cache
 **/
const has = (key) => {
  return cache.has(key)
}
/**
 * @description delete key from cache
 * @access public
 * @param {String} key - key in cache to delete
 * @returns {undefined}
 **/
const del = (key) => {
  cache.del(key)
}
/**
 * @description removes all cache entries
 * @access public
 * @returns {undefined}
 **/
const clear = () => {
  cache.reset()
}
/**
 * @description Create a job that subscribes to an RxJS worker
@@ -185,2 +227,3 @@ * @access private
const get = (key, config = {}) => {
  // TODO: support stale-while-revalidate
  const {
@@ -202,4 +245,32 @@ ttl = DEFAULT_CACHE_EXPIRE,
/**
 * @description set value in cache
 * @description get keys from cache
 * @access public
 * @returns {Array<String>} - keys
 **/
const keys = () => cache.keys()
/**
 * @description get values from cache
 * @access public
 * @returns {Array<Any>} - values
 **/
const values = () => cache.values()
/**
 * @description get cache entries
 * @access public
 * @returns {Map<String, Any>} - values
 **/
const entries = () => {
  const vals = values()
  return new Map(keys().reduce((acc, key, i) => {
    acc.push([key, vals[i]])
    return acc
  }, []))
}
/**
 * @description expire a cache key (i.e. set TTL = 0)
 * @access public
 * @param {Array<String>} keys - array of keys to expire (supports * as wildcards, converted to .* regexp)
@@ -238,13 +309,17 @@ * @returns {undefined}
return {
  addDisposer,
  removeDisposer,
  get,
  set,
  has,
  del,
  keys,
  values,
  entries,
  clear,
  expire,
  addDisposer,
  removeDisposer,
  // helpers
  debug,
  log,
  maxLength,
  // for testing purposes
  cache,
  waiting
@@ -251,0 +326,0 @@ }
@@ -166,2 +166,39 @@ import persistentCache from '../'
})
describe('-> del/clear', () => {
  let delSpy
  let cache
  beforeEach(() => {
    delSpy = sinon.spy((key, cb) => {
      if (key.indexOf('house/1') > -1) {
        return cb(null, 'ok')
      }
      cb(new Error('dummyerror'), null)
    })
    cache = inMemoryCache({log: dummyLog})
    cache.use(persistentCache(mockRedisFactory({del: delSpy}), {bootload: false}))
  })
  it('should delete key from redis when a key is deleted from lru-cache', () => {
    return cache.del('house/1').then(() => {
      expect(delSpy.called).to.equal(true)
    })
  })
  it('should reject when deleting a key from redis fails', () => {
    return cache.del('key').catch(() => {
      expect(delSpy.called).to.equal(true)
    })
  })
  it('should delete all keys in redis with prefix', () => {
    sinon.stub(utils, 'deleteKeys').resolves()
    return cache.clear().then(() => {
      expect(utils.deleteKeys.called).to.equal(true)
      expect(utils.deleteKeys.args[0][0]).to.equal(`${pkg.name}-`)
      utils.deleteKeys.restore()
    })
  })
})
})
import {
  deleteKey,
  deleteKeys,
  readKeys,
@@ -52,6 +53,5 @@ extractKeyFromRedis,
}, 20)
const y = (keysToRead, cb) => {
mgetSpy = sinon.spy((keysToRead, cb) => {
cb(null, [JSON.stringify({hei: 'verden'})])
}
mgetSpy = sinon.spy(y)
})
redisClient = mockRedisFactory({mget: mgetSpy}, {events})()
@@ -104,2 +104,20 @@ return loadObjects('test-localhost8080', redisClient, dummyLog).then((results) => {
describe('-> deleteKeys', () => {
  it('should delete all keys with prefix from redis', () => {
    const delSpy = sinon.spy((key, cb) => cb(null, 'ok'))
    const events = {}
    setTimeout(() => {
      events.data[0](['test-localhost8080-myKey'])
      events.end[0]()
    }, 100)
    const mgetSpy = sinon.spy((keysToRead, cb) => {
      cb(null, [JSON.stringify({hei: 'verden'})])
    })
    const redisClient = mockRedisFactory({del: delSpy, mget: mgetSpy}, {events})()
    return deleteKeys('asdf', redisClient).then((...args) => {
      expect(delSpy.called).to.equal(true)
    })
  })
})
describe('-> readKeys', () => {
@@ -111,4 +129,3 @@ let redisClient
const values = Object.keys(redisCache).map((key) => redisCache[key])
const p = (keys, cb) => cb(null, values)
mgetSpy = sinon.spy(p)
mgetSpy = sinon.spy((keys, cb) => cb(null, values))
redisClient = mockRedisFactory({mget: mgetSpy})()
@@ -115,0 +132,0 @@ return readKeys(Object.keys(redisCache), redisClient, dummyLog).then((result) => {
@@ -7,2 +7,3 @@ /**
deleteKey,
deleteKeys,
extractKeyFromRedis,
@@ -45,3 +46,3 @@ getRedisKey,
cacheInstance.log.debug(`Persist to key "${redisKey}"`)
const objWithMeta = cacheInstance.cache.get(key)
const objWithMeta = cacheInstance.get(key)
redisClient.set(redisKey, JSON.stringify(objWithMeta), 'ex', Math.round((objWithMeta.TTL + grace) / 1000), (err) => {
@@ -61,2 +62,14 @@ if (err) {
const del = (key, next) => {
  return deleteKey(getRedisKey(cacheKeyPrefix, key), redisClient).then(() => {
    next(key)
  })
}
const clear = (next) => {
  return deleteKeys(cacheKeyPrefix, redisClient).then(() => {
    next()
  })
}
const load = () => {
@@ -98,2 +111,4 @@ const then = Date.now()
get,
del,
clear,
load,
@@ -100,0 +115,0 @@ debug,
@@ -18,18 +18,6 @@ /**
export const getRedisKey = (prefix, key) => {
export const getRedisKey = (prefix, key = '') => {
  return `${[prefix, key].join('-')}`
}
export const deleteKey = (key, redisClient) => {
  return new Promise((resolve, reject) => {
    redisClient.del(key, (err, res) => {
      if (err) {
        reject(err)
        return
      }
      resolve(res)
    })
  })
}
export const readKeys = (keys, redisClient, log) => {
@@ -87,2 +75,20 @@ if (keys.length === 0) {
export const deleteKey = (key, redisClient) => {
  return new Promise((resolve, reject) => {
    redisClient.del(key, (err, res) => {
      if (err) {
        reject(err)
        return
      }
      resolve(res)
    })
  })
}
export const deleteKeys = (cacheKeyPrefix, redisClient) => {
  return scanKeys(cacheKeyPrefix, redisClient).then((keys) => {
    return Promise.all(keys.map((key) => deleteKey(key, redisClient)))
  })
}
export const loadObjects = (cacheKeyPrefix, redisClient, log) => {
@@ -89,0 +95,0 @@ return scanKeys(cacheKeyPrefix, redisClient)