@nrk/nodecache-as-promised
Node.js in-memory cache with Promise support. Extendable with middlewares. Middlewares provided: distributed invalidation and persistence of cache misses.
A fast and resilient cache for Node.js, targeting high-volume sites.
npm install @nrk/nodecache-as-promised --save
Sometimes Node.js needs to do some heavy lifting, performing CPU- or network-intensive tasks while still responding quickly to incoming requests. For repetitive tasks like server-side rendering of markup or parsing big JSON responses, caching can give the application a great performance boost. Since many requests may hit the server concurrently, you do not want more than one worker to run for a given resource at the same time. In addition, serving stale content when a backend resource is down may save your day! The intention of nodecache-as-promised is to give you a fairly simple interface, yet a powerful application cache, with fine-grained control over caching behaviour.
nodecache-as-promised is inspired by how Varnish works. It is not intended to replace Varnish (but works great in combination with it). In general, Varnish works great as an edge/burst/failover cache, in addition to reverse proxying and load balancing. Several other cache solutions exist on NPM, but they are often too basic or too attached to a combination of prerequisites that does not fit every application cache's needs.
Performance testing: parsing a JSON file of around 47 kB (the file contents are cached at startup), on a MacBook Pro (mid 2015, 16 GB RAM, i7 CPU).

The first graph comes from running the test script npm run perf:nocache-cache-file -- --type=linear. At around 1,300 iterations the event loop starts lagging, and at around 1,500 iterations the process stops responding. This shows that even the natively optimized JSON.parse can become a bottleneck when fetching remote API data for rendering (React.render would be even slower).

The second graph comes from running the test script npm run perf:cache -- --type=linear. At around 3.1 million iterations the event loop starts lagging, and at around 3.4 million iterations the process runs out of memory and crashes. The graph says nothing about how fast JSON.parse is; it shows what speed is achievable by skipping it altogether (i.e. pure Promise processing).
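In application code, the cached variant boils down to wrapping the parse in a worker. A minimal sketch using the API described below (rawJson and render are hypothetical stand-ins for your payload and view code):

import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache()
// concurrent callers share one worker run; the parsed result is cached
cache.get('parsed-payload', {
  worker: () => Promise.resolve(JSON.parse(rawJson))
}).then(({value}) => render(value))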
Create a new inMemoryCache instance using the factory method. This instance may be extended by the distCache and/or persistentCache middlewares (via .use(..)).
Creating a new instance
import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache(options)
An object containing configuration:

- initial (Object): initial key/value set to prefill the cache. Default: {}
- maxLength (Number): max key count before the LRU cache evicts objects. Default: 1000
- maxAge (Number): max time before a (stale) key is evicted by the LRU cache, in ms. Default: 172800000 (48h)
- Object with a log4j facade: used to log internal work. Default: console
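As a quick sketch, these options map onto the factory like this (the values shown are just the defaults spelled out; the option names initial, maxLength and maxAge are taken from the examples further down):

import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache({
  initial: {},       // prefill the cache
  maxLength: 1000,   // LRU max object count
  maxAge: 172800000  // LRU max age in ms (48h)
})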
Creating a new distCache middleware instance
import inMemoryCache, {distCache} from '@nrk/nodecache-as-promised'
const cache = inMemoryCache()
cache.use(distCache(redisFactory, namespace))
Parameters that must be provided upon creation:

- redisFactory (Function): a function that returns an ioredis-compatible redisClient.
- namespace (String): pub/sub namespace used for distributed expiries.
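For illustration, the redisFactory can simply wrap an ioredis constructor (mirroring the combined example further down; 'myNamespace' is an arbitrary example value):

import Redis from 'ioredis'
// a factory returning a fresh ioredis-compatible client
const redisFactory = () => new Redis(/* connection options */)
cache.use(distCache(redisFactory, 'myNamespace'))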
Creating a new persistentCache middleware instance

import inMemoryCache, {persistentCache} from '@nrk/nodecache-as-promised'
const cache = inMemoryCache()
cache.use(persistentCache(redisFactory, options))
Parameters that must be provided upon creation:

- redisFactory (Function): a function that returns an ioredis-compatible redisClient.
- RegExp: keys matching this regexp are not persisted to the cache. Default: null
- keySpace (String): prefix used when storing keys in redis.
- grace (Number): used to calculate the TTL in redis (before auto removal), i.e. object.TTL + grace. Default: 86400000 (24h)
- Boolean: flag choosing whether the persisted cache is loaded from redis on middleware creation. Default: true
Once an instance is created (with or without middlewares), the following methods may be used.
Get an item from the cache.
const {value} = cache.get('myKey')
console.log(value)
Using the options parameter, the function either fetches a value from the cache or executes the provided worker if the cache is stale or cold. The worker sets the cache key if run, and the call thus returns a Promise.
cache.get('myKey', options)
.then(({value}) => {
console.log(value)
})
Configuration for the newly created object:

- worker (Function): a function that returns a promise which resolves to the new value to be set in the cache.
- ttl (Number): TTL (in ms) before the cached object becomes stale. Default: 86400000 (24h)
- workerTimeout (Number): max time (in ms) the promise is allowed to run. Default: 5000
- deltaWait (Number): delta wait (in ms) before retrying the promise when stale. Default: 10000
NOTE: It might seem a bit strange to set cache values using .get, but this avoids a series of operations: using .get() to check if a value exists, then calling .set(), and finally running .get() once more (which makes queueing difficult). In summary: .get() returns a value from the cache or from a provided worker.
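A sketch of the difference (fetchValue is a hypothetical worker function):

// without a worker: check, set, read again (hard to queue)
// if (!cache.get('myKey')) { cache.set('myKey', someValue) }
// const {value} = cache.get('myKey')

// with a worker: one call, and concurrent callers
// for the same key share a single worker run
cache.get('myKey', { worker: () => fetchValue() })
  .then(({value}) => console.log(value))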
Set a new cache value.
// set a cache value that becomes stale after 1 minute
cache.set('myKey', 'someData', 60 * 1000)
If the ttl parameter is omitted, a default of 86400000 ms (24h) is used.
Check if a key is in the cache, without updating its recent-ness or deleting it for being stale.
Deletes a key from the cache.
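A sketch, assuming the methods are named has and del (both names are assumptions here):

// check presence without affecting recent-ness (name assumed)
if (cache.has('myKey')) {
  cache.del('myKey') // remove the key (name assumed)
}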
Mark keys as stale (i.e. set TTL = 0):
cache.expire(['myKey*', 'anotherKey'])
An asterisk (*) is used as a wildcard.
Get all keys stored in the cache as an array of strings.
Get all values stored in the cache as an array.
Get all entries stored in the cache as a Map of keys and values.
Clear the cache entirely, throwing away all values.
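A sketch, assuming the conventional names keys, values, entries and clear (all four names are assumptions here):

console.log(cache.keys())    // ['myKey', 'anotherKey', ...]
console.log(cache.values())  // the cached value objects
console.log(cache.entries()) // Map of key => value
cache.clear()                // throw everything away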
Add a callback to be called when an item is evicted by the LRU cache; used to do cleanup:
const cb = (key, value) => cleanup(key, value)
cache.addDisposer(cb)
Remove a callback attached to the LRU cache:
cache.removeDisposer(cb)
Prints debug information about the current cache (i.e. hot keys, stale keys, keys in waiting state etc.). Use extraData to add custom properties to the debug info, e.g. a hostname:
cache.debug({hostname: os.hostname()})
Note! These examples are written using ES2015 syntax. The lib is exported using Babel as CJS modules.
import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache({ /* options */})
// implicitly set cache on miss, or use cached value
cache.get('key', { worker: () => Promise.resolve({hello: 'world'}) })
.then((data) => {
console.log(data)
// {
// value: {
// hello: 'world'
// },
// created: 123456789,
// cache: 'miss',
// TTL: 86400000
// }
})
import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache({
initial: { // initial state
foo: 'bar'
},
maxLength: 1000, // LRU max object count
maxAge: 24 * 60 * 60 * 1000 // LRU max age in ms
})
// set/overwrite cache key
cache.set('key', {hello: 'world'})
// implicitly set cache on miss, or use cached value
cache.get('anotherkey', {
worker: () => Promise.resolve({hello: 'world'}),
ttl: 60 * 1000, // TTL for cached object, in ms
workerTimeout: 5 * 1000, // worker timeout, in ms
deltaWait: 5 * 1000, // wait time, if worker fails
}).then((data) => {
console.log(data)
// {
// value: {
// hello: 'world'
// },
// created: 123456789,
// cache: 'miss',
// TTL: 60000
// }
})
Distributed expiry and persistence of cache misses to Redis are provided as middlewares, i.e. they extend the in-memory cache with interceptors. Writing your own middlewares, using pub/sub from RabbitMQ or ZeroMQ, persisting to a NAS, or counting hit/miss ratios, should be easy.
import inMemoryCache, {distCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'
// a factory function that returns a redisClient
const redisFactory = () => new Redis(/* options */)
const cache = inMemoryCache({initial: {fooKey: 'bar'}})
cache.use(distCache(redisFactory, 'namespace'))
// publish to redis (using wildcard)
cache.expire(['foo*'])
setTimeout(() => {
cache.get('fooKey').then(console.log)
// expired on both server #1 and #2
// {value: {fooKey: 'bar'}, created: 123456789, cache: 'stale', TTL: 86400000}
}, 1000)
import inMemoryCache, {persistentCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'
const redisFactory = () => new Redis(/* options */)
const cache = inMemoryCache({/* options */})
cache.use(persistentCache(
redisFactory,
{
keySpace: 'myCache', // key prefix used when storing in redis
grace: 60 * 60 // auto expire unused keys in Redis after TTL + grace seconds
}
))
cache.get('key', { worker: () => Promise.resolve('hello') })
// will store a key in redis, using key: myCache-<key>
// {value: 'hello', created: 123456789, cache: 'hit', TTL: 60000}
import inMemoryCache, {distCache, persistentCache} from '@nrk/nodecache-as-promised'
import Redis from 'ioredis'
const redisFactory = () => new Redis(/* options */)
const cache = inMemoryCache({/* options */})
cache.use(distCache(redisFactory, 'namespace'))
cache.use(persistentCache(
redisFactory,
{
keySpace: 'myCache', // key prefix used when storing in redis
grace: 60 * 60 // auto expire unused keys in Redis after TTL + grace seconds
}
))
cache.expire(['foo*']) // distributed expire of all keys starting with foo
cache.get('key', {
worker: () => Promise.resolve('hello'),
ttl: 60000, // in ms
workerTimeout: 5000,
deltaWait: 5000
}).then(console.log)
// will store a key in redis, using key: myCache-<key>
// {value: 'hello', created: 123456789, cache: 'miss', TTL: 60000}
First clone the repo and install its dependencies:
git clone git@github.com:nrkno/nodecache-as-promised.git
cd nodecache-as-promised
git checkout -b feature/my-changes
npm install && npm run build && npm run test
A middleware consists of three parts: a factory function that receives the cacheInstance, one or more interceptor functions, each taking a next parameter that runs the next function in the middleware chain, and a returned facade object exposing the intercepted functions. Let's say you want to build a middleware that notifies some other part of your application when a new value has been set (e.g. using RxJS streams).
Here's an example of how to achieve this:
// export namespace to be applied in inMemoryCache.use().
export const streamingMiddleware = (onSet, onDispose) => (cacheInstance) => {
// create a function that runs before the others in the middleware chain
const set = (key, value, next) => {
onSet(key, value)
next(key, value)
}
// use functionality exposed by the inMemoryCache instance
cacheInstance.addDisposer(onDispose)
// export facade
return {
set
}
}
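Applying the middleware then looks like this (the console.log callbacks are hypothetical placeholders for whatever your application provides as onSet and onDispose):

import inMemoryCache from '@nrk/nodecache-as-promised'
const cache = inMemoryCache()
// every .set() now also notifies onSet before hitting the cache,
// and onDispose is called when the LRU cache evicts an item
cache.use(streamingMiddleware(
  (key, value) => console.log('set', key, value),
  (key, value) => console.log('evicted', key, value)
))
cache.set('myKey', 'someData')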
After applying your changes, remember to build and run/fix tests before pushing them upstream.
# run the tests, generate code coverage report
npm run test
# inspect code coverage
open ./coverage/lcov-report/index.html
# update the code
npm run build
git commit -am "Add my changes"
git push origin feature/my-changes
# then make a PR to the master branch,
# and assign one of the maintainers to review your code
NOTE! Please make sure to keep commits small and clean (and that the commit message actually refers to the updated files). Stylistically, make sure the commit message is capitalized and starts with a verb in the present tense (e.g. Add minification support).
MIT © NRK