Cache-cache
Motivation
You are right, there are already a few packages that do this. So why bother rewriting one from scratch? Here are the reasons:
- They are quite old; some are a mix of callbacks and promises, which makes them hard to maintain
- Some packages don't handle multiple cache layers (in-memory/redis)
- They generally expose a complex API to the end user
- When using Redis as a cache, they didn't handle failures from upstream servers (timeouts, errors)
Our objectives when writing this package were simple:
- Simple to use
- Easy to understand how it works
- Typings (to avoid quite a lot of errors)
- Handle failures correctly: when memoizing a function, we always want to do something even if the cache is down; in that case we call the expensive function instead of reporting an error, even if it's slower
API
Config
By default, the package is configured to use the MEMORY cache layer with a TTL of 15s; you can of course change those defaults.
Each method accepts a whole configuration; if you want a common one, you can use useAsDefault:
import Redis from 'ioredis'
import { getMemoize, useAsDefault, AvailableCacheLayer } from '@rlvt/cache-cache'

// set the default configuration used by every cache/memoize call
useAsDefault({
  layerConfigs: {
    [AvailableCacheLayer.MEMORY]: {
      ttl: 10 * 1000
    },
    [AvailableCacheLayer.REDIS]: {
      ttl: 30 * 60 * 1000,
      redisClient: new Redis() // the REDIS layer needs an ioredis client (see configuration reference)
    }
  },
  layerOrder: [
    AvailableCacheLayer.MEMORY,
    AvailableCacheLayer.REDIS
  ]
})

const expensiveFunction = async () => {
  return {}
}

// a specific configuration can still be passed to each call
const noMoreExpensive = getMemoize(expensiveFunction, {
  layerConfigs: {
    [AvailableCacheLayer.MEMORY]: {
      ttl: 10 * 1000
    }
  },
  layerOrder: [
    AvailableCacheLayer.MEMORY
  ]
})
Things to know:
Memoize a function
NOTE: You can only memoize async functions, since calls to the cache can be asynchronous.
WARNING: If your function returns undefined, it will not be cached; prefer returning null.
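For example, a lookup that may find nothing should return null so that the "not found" result is cached too. A minimal sketch using the getMemoize API described below (the names are purely illustrative):

import { getMemoize, AvailableCacheLayer } from '@rlvt/cache-cache'

const users = new Map<string, { name: string }>([['admin', { name: 'Alice' }]])

// return null rather than undefined when nothing is found,
// so the "not found" result is stored in the cache as well
const findAdmin = async () => {
  return users.get('admin') ?? null
}

const cachedFindAdmin = getMemoize(findAdmin, {
  layerConfigs: {
    [AvailableCacheLayer.MEMORY]: { ttl: 10 * 1000 }
  },
  layerOrder: [ AvailableCacheLayer.MEMORY ]
})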
You currently have two APIs to memoize a function:
- if you are using TypeScript, you can use the @Memoize decorator:
import { Memoize, AvailableCacheLayer } from '@rlvt/cache-cache'

// decorators can only be applied to class methods
class ExpensiveService {
  @Memoize({
    layerConfigs: {
      [AvailableCacheLayer.MEMORY]: {
        ttl: 10 * 1000
      }
    },
    layerOrder: [
      AvailableCacheLayer.MEMORY
    ]
  })
  async expensiveFunction () {
    return {}
  }
}
- otherwise, you can wrap your function with getMemoize:

import { getMemoize, AvailableCacheLayer } from '@rlvt/cache-cache'

const expensiveFunction = async () => {
  return {}
}

const noMoreExpensive = getMemoize(expensiveFunction, {
  layerConfigs: {
    [AvailableCacheLayer.MEMORY]: {
      ttl: 10 * 1000
    }
  },
  layerOrder: [
    AvailableCacheLayer.MEMORY
  ]
})
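The wrapped version is then called exactly like the original function. A sketch of the expected behaviour, continuing the example above (assuming getMemoize returns an async function with the same signature):

// first call actually executes expensiveFunction and stores the result
const first = await noMoreExpensive()
// calls made within the 10s TTL should be answered from the MEMORY layer
const second = await noMoreExpensive()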
Simple key/value cache
You can use the getStore API to get a cache that you can use with simple get/set functions:
import { getStore, AvailableCacheLayer } from '@rlvt/cache-cache'

const store = getStore({
  layerConfigs: {
    [AvailableCacheLayer.MEMORY]: {
      ttl: 10 * 1000
    }
  },
  layerOrder: [
    AvailableCacheLayer.MEMORY
  ]
})

await store.set('key', { value: 1 })
const obj = await store.get('key') // { value: 1 }
await store.clear('key')
const cleared = await store.get('key') // the key is no longer in the cache
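A store can also combine several layers. Below is a minimal sketch of a memory + Redis store, assuming layerOrder lists the layers from first to last consulted and using an ioredis client for the REDIS layer (as required by the configuration reference below); the key names are illustrative:

import Redis from 'ioredis'
import { getStore, AvailableCacheLayer } from '@rlvt/cache-cache'

const layeredStore = getStore({
  layerConfigs: {
    [AvailableCacheLayer.MEMORY]: {
      ttl: 10 * 1000
    },
    [AvailableCacheLayer.REDIS]: {
      ttl: 30 * 60 * 1000,
      redisClient: new Redis(),
      prefix: 'my-app' // optional key prefix
    }
  },
  layerOrder: [
    AvailableCacheLayer.MEMORY,
    AvailableCacheLayer.REDIS
  ]
})

// values are written and read through the configured layers
await layeredStore.set('user:1', { name: 'Alice' })
const cached = await layeredStore.get('user:1')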
TypeScript typing
Since the data stored inside the cache isn't typed, by default you will only get a raw object type when fetching from the cache.
Our APIs accept a generic argument that is the type of the memoized function:
import { Memoize } from '@rlvt/cache-cache'

type Person = {
  age: number,
  name: string
}

class PersonFetcher {
  // the generic argument is the type of the memoized function
  @Memoize<() => Promise<Person>>()
  async expensiveFunction (): Promise<Person> {
    return { age: 42, name: 'Alice' }
  }
}
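Continuing the illustrative PersonFetcher example above, the cached value now comes back typed:

const fetcher = new PersonFetcher()
const person = await fetcher.expensiveFunction()
// `person` is typed as Person, so property access is type-checked
console.log(person.name, person.age)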
Full configuration reference
enum AvailableCacheLayer {
  MEMORY = 'MEMORY',
  REDIS = 'REDIS'
}

type Config = {
  layerConfigs: {
    [AvailableCacheLayer.REDIS]?: {
      ttl: number,
      ttlMultiplier?: number,
      timeout?: number,
      shallowErrors?: boolean,
      redisClient: IORedis.Redis,
      prefix?: string,
      hashmap?: boolean // default: false
    },
    [AvailableCacheLayer.MEMORY]?: {
      ttl: number,
      ttlMultiplier?: number,
      timeout?: number,
      shallowErrors?: boolean,
      maxEntries?: number
    }
  },
  layerOrder: AvailableCacheLayer[]
}
License
Apache-2.0