# @graphile/lru

> You probably want `lru-cache` instead.
This is an obsessively optimized LRU cache for Node.js. It forgoes features in
favour of performance, and is very marginally faster than node-lru-cache v6 in
certain circumstances (namely those that we care about inside the Graphile
internals). A performance comparison against node-lru-cache v7 has not yet been
performed.
## Usage
```js
import { inspect } from "node:util";
import LRU from "@graphile/lru";

const lru = new LRU({
  maxLength: 500,
  dispose(key, value) {
    console.log(`Disposing of key '${key}' with value '${inspect(value)}'`);
  },
});

const ANY_KEY_HERE = { foo: "bar" };
const ANY_VALUE_HERE = { randomNumber: () => 4 };

lru.set(ANY_KEY_HERE, ANY_VALUE_HERE);
const value = lru.get(ANY_KEY_HERE);
lru.reset();
```
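The semantics above are the classic LRU technique: once the cache holds
`maxLength` entries, setting a new key evicts the least-recently-used entry and
invokes `dispose` for it. As an illustration only (this is *not* how
`@graphile/lru` is implemented internally, and `MiniLRU` is a hypothetical
name), the same behaviour can be sketched with a plain `Map`, relying on its
insertion-order iteration:

```js
// Minimal, unoptimized sketch of LRU semantics - illustrative only,
// not the internals of @graphile/lru.
class MiniLRU {
  constructor({ maxLength, dispose }) {
    this.maxLength = maxLength;
    this.dispose = dispose;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Delete and re-insert so this key becomes the most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxLength) {
      // Map iterates in insertion order, so the first key is the
      // least recently used one.
      const oldestKey = this.map.keys().next().value;
      const oldestValue = this.map.get(oldestKey);
      this.map.delete(oldestKey);
      this.dispose?.(oldestKey, oldestValue);
    }
  }
  reset() {
    if (this.dispose) {
      for (const [key, value] of this.map) this.dispose(key, value);
    }
    this.map.clear();
  }
}
```

With `maxLength: 2`, touching a key via `get` protects it: setting a third key
then evicts (and disposes of) the untouched one instead.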
## Considering a pull request?
Pull requests that add features not required by the Graphile suite are unlikely
to be entertained: use node-lru-cache instead!

Pull requests that improve performance should come with benchmarks, benchmark
scripts, and an explanation of the technique. And if you pull it off without
breaking anything: YES PLEASE.

Pull requests that add documentation are welcome.