What is @apollo/utils.keyvaluecache?
@apollo/utils.keyvaluecache is a utility package provided by Apollo that offers a simple and efficient key-value caching mechanism. It is designed to be used in conjunction with Apollo Server and other Apollo tools to improve performance by caching frequently accessed data.
What are @apollo/utils.keyvaluecache's main functionalities?
Basic Key-Value Storage
This feature allows you to store and retrieve key-value pairs in an in-memory cache. The example demonstrates setting a value for a key and then retrieving it.
const { InMemoryLRUCache } = require('@apollo/utils.keyvaluecache');
const cache = new InMemoryLRUCache();
async function cacheData() {
  await cache.set('key', 'value');
  const value = await cache.get('key');
  console.log(value); // Outputs: 'value'
}
cacheData();
Expiration Time
This feature allows you to set a time-to-live (TTL) for cached items. The example demonstrates setting a value with a TTL of 1 second and then attempting to retrieve it after 2 seconds, by which point the entry has expired and get returns undefined.
const { InMemoryLRUCache } = require('@apollo/utils.keyvaluecache');
const cache = new InMemoryLRUCache();
async function cacheDataWithTTL() {
  await cache.set('key', 'value', { ttl: 1 }); // TTL in seconds
  setTimeout(async () => {
    const value = await cache.get('key');
    console.log(value); // Outputs: undefined
  }, 2000);
}
cacheDataWithTTL();
Deleting Cache Entries
This feature allows you to delete specific entries from the cache. The example demonstrates setting a value, deleting it, and then attempting to retrieve it, which returns undefined.
const { InMemoryLRUCache } = require('@apollo/utils.keyvaluecache');
const cache = new InMemoryLRUCache();
async function deleteCacheEntry() {
  await cache.set('key', 'value');
  await cache.delete('key');
  const value = await cache.get('key');
  console.log(value); // Outputs: undefined
}
deleteCacheEntry();
Other packages similar to @apollo/utils.keyvaluecache
node-cache
node-cache is a simple and efficient in-memory caching module for Node.js. It provides similar functionalities such as setting, getting, and deleting cache entries, as well as setting TTL for cached items. Compared to @apollo/utils.keyvaluecache, node-cache is more general-purpose and not specifically tied to Apollo's ecosystem.
lru-cache
lru-cache is a highly efficient Least Recently Used (LRU) cache implementation for Node.js. It offers similar features like setting, getting, and deleting cache entries, and supports TTL. lru-cache is known for its performance and is widely used; in fact, @apollo/utils.keyvaluecache's own InMemoryLRUCache wraps lru-cache, which makes it a strong alternative when you don't need the Apollo-specific interface.
memory-cache
memory-cache is another in-memory caching solution for Node.js. It provides basic caching functionalities such as setting, getting, and deleting cache entries, along with TTL support. memory-cache is lightweight and easy to use, making it a good alternative for simple caching needs outside of the Apollo ecosystem.
KeyValueCache interface
export interface KeyValueCache<
  V = string,
  SO extends KeyValueCacheSetOptions = KeyValueCacheSetOptions,
> {
  get(key: string): Promise<V | undefined>;
  set(key: string, value: V, options?: SO): Promise<void>;
  delete(key: string): Promise<boolean | void>;
}
This interface defines a minimally-compatible cache intended for (but not limited to) use by Apollo Server. It is notably implemented by KeyvAdapter from the @apollo/utils.keyvadapter package. (KeyvAdapter in conjunction with a Keyv is probably more interesting to you unless you're actually building a cache!)
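To make that shape concrete, here is a minimal sketch of a hand-rolled implementation backed by a plain Map. The MapCache class is purely illustrative (it is not part of the package), and it ignores the ttl option that a real backend would honor; it assumes both types are exported from the package entry point.
import type {
  KeyValueCache,
  KeyValueCacheSetOptions,
} from "@apollo/utils.keyvaluecache";

// Illustrative only: an in-memory KeyValueCache with no eviction and no TTL support.
class MapCache implements KeyValueCache<string> {
  private store = new Map<string, string>();

  async get(key: string): Promise<string | undefined> {
    return this.store.get(key);
  }

  async set(
    key: string,
    value: string,
    _options?: KeyValueCacheSetOptions, // a real implementation would honor _options?.ttl
  ): Promise<void> {
    this.store.set(key, value);
  }

  async delete(key: string): Promise<boolean> {
    return this.store.delete(key);
  }
}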
InMemoryLRUCache
This class wraps lru-cache and implements the KeyValueCache interface. It accepts LRUCache.Options as the constructor argument and passes them to the LRUCache which is created. A default maxSize and sizeCalculation are provided in order to prevent an unbounded cache; these can both be tweaked via the constructor argument.
const cache = new InMemoryLRUCache({
  maxSize: Math.pow(2, 20) * 50,
});
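If the defaults don't suit your workload, both limits can be overridden together. A small sketch, assuming the usual lru-cache option names (maxSize plus a sizeCalculation function):
import { InMemoryLRUCache } from "@apollo/utils.keyvaluecache";

// Bound the cache to roughly 10MiB, measuring each entry by its string length.
const smallCache = new InMemoryLRUCache<string>({
  maxSize: Math.pow(2, 20) * 10,
  sizeCalculation: (value) => value.length,
});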
PrefixingKeyValueCache
This class wraps a KeyValueCache in order to provide a specified prefix for keys entering the cache via this wrapper.
const cache = new InMemoryLRUCache();
const prefixedCache = new PrefixingKeyValueCache(cache, "apollo:");
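A quick illustration of the effect: keys written through the wrapper are stored in the underlying cache under the prefixed name, so a key of user:1 ends up as apollo:user:1 (the key and value here are arbitrary examples):
await prefixedCache.set("user:1", "Ada");

await prefixedCache.get("user:1"); // "Ada" (the wrapper adds the prefix on reads too)
await cache.get("apollo:user:1"); // "Ada" (how the key is actually stored)
await cache.get("user:1"); // undefined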
One reason to use this is if a single piece of software wants to use a cache for multiple features. For example, you can pass a KeyValueCache as the cache option to @apollo/server's ApolloServer class; it provides this cache to plugins and other features as a default cache to use (if the user does not provide the specific plugin its own cache). Each feature uses PrefixingKeyValueCache with a different prefix to prevent different features from stomping on each other's data.
However, if you are configuring one of those features explicitly, you may not want this prefix to be added. In that case, you can wrap your cache in a cache returned by PrefixingKeyValueCache.cacheDangerouslyDoesNotNeedPrefixesForIsolation. The only difference between this cache and the cache that it wraps is that when it is passed directly to a PrefixingKeyValueCache, no prefix is applied.
That is, let's say you are using a class that is implemented like this:
class SomePlugin {
  private cache: KeyValueCache;
  constructor(options: { cache: KeyValueCache }) {
    this.cache = new PrefixingKeyValueCache(options.cache, "some:");
  }
}
If you set up your plugin as new SomePlugin({ cache: myRedisCache }), then the plugin will add some: to all keys when interacting with your cache, but if you set it up as new SomePlugin({ cache: PrefixingKeyValueCache.cacheDangerouslyDoesNotNeedPrefixesForIsolation(myRedisCache) }), then the plugin will not apply its prefix (both setups are shown below). You should only do this if you feel confident that this feature's use of this cache will not overlap with another feature: perhaps this is the only feature you have configured to use this cache, or perhaps the feature provides suitable control over cache keys so that you can ensure isolation without needing the plugin's prefix.
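Putting the two setups from the paragraph above side by side (myRedisCache stands in for whatever KeyValueCache you have constructed):
// Keys are written with the plugin's "some:" prefix added automatically.
const isolatedPlugin = new SomePlugin({ cache: myRedisCache });

// Keys are written as-is; isolation is now your responsibility.
const unprefixedPlugin = new SomePlugin({
  cache:
    PrefixingKeyValueCache.cacheDangerouslyDoesNotNeedPrefixesForIsolation(
      myRedisCache,
    ),
});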
Software like ApolloServer that passes a single KeyValueCache to several features should throw if a PrefixesAreUnnecessaryForIsolationCache is provided to it; it can check this condition with the static PrefixingKeyValueCache.prefixesAreUnnecessaryForIsolation method (which is safer than an instanceof check in case there are multiple copies of @apollo/utils.keyvaluecache installed).
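Such a check might look something like the following sketch (assertSafeSharedCache is a hypothetical helper, not something ApolloServer exports):
import {
  PrefixingKeyValueCache,
  type KeyValueCache,
} from "@apollo/utils.keyvaluecache";

// Reject caches that have opted out of prefixing, since this component shares
// one cache across several features and relies on prefixes for isolation.
function assertSafeSharedCache(cache: KeyValueCache): void {
  if (PrefixingKeyValueCache.prefixesAreUnnecessaryForIsolation(cache)) {
    throw new Error(
      "Do not pass a cache returned by " +
        "PrefixingKeyValueCache.cacheDangerouslyDoesNotNeedPrefixesForIsolation " +
        "to a component that shares it across features.",
    );
  }
}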
ErrorsAreMissesCache
This class wraps a KeyValueCache in order to provide error tolerance for caches which connect via a client like Redis. In the event that there's an error, this wrapper will treat it as a cache miss (and log the error instead, if a logger is provided).
An example usage (which makes use of the keyv Redis client and our KeyvAdapter) would look something like this:
import Keyv from "keyv";
import { KeyvAdapter } from "@apollo/utils.keyvadapter";
import { ErrorsAreMissesCache } from "@apollo/utils.keyvaluecache";
const redisCache = new Keyv("redis://user:pass@localhost:6379");
const faultTolerantCache = new ErrorsAreMissesCache(
  new KeyvAdapter(redisCache),
);
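From there, faultTolerantCache is used like any other KeyValueCache; the difference is that a Redis outage or connection error surfaces as a miss rather than a rejected promise:
// If Redis is unreachable, this resolves to undefined (and the error is logged
// if a logger was provided) instead of throwing.
const value = await faultTolerantCache.get("some-key");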