memoizerific
Fast, small, efficient JavaScript memoization lib to memoize JS functions
memoizerific is a JavaScript library that provides a simple and efficient way to memoize functions. Memoization is a technique used to speed up function execution by caching the results of expensive function calls and returning the cached result when the same inputs occur again.
Basic Memoization
This feature allows you to memoize a function with a specified cache size. In this example, the function adds two numbers and caches the result for up to 100 different input combinations.
const memoizerific = require('memoizerific');
const memoizedFunction = memoizerific(100)((a, b) => a + b);
console.log(memoizedFunction(1, 2)); // 3
console.log(memoizedFunction(1, 2)); // 3 (cached result)
Cache Size Limitation
This feature demonstrates how the cache size limit works. When the cache exceeds the specified size, the least recently used result is evicted.
const memoizerific = require('memoizerific');
const memoizedFunction = memoizerific(2)((a, b) => a + b);
memoizedFunction(1, 2); // Cache: {(1, 2): 3}
memoizedFunction(2, 3); // Cache: {(1, 2): 3, (2, 3): 5}
memoizedFunction(3, 4); // Cache: {(2, 3): 5, (3, 4): 7} (1, 2) is evicted
Function Argument Handling
This feature shows how memoizerific can handle functions with variable numbers of arguments. The function sums all its arguments and caches the result.
const memoizerific = require('memoizerific');
const memoizedFunction = memoizerific(100)((...args) => args.reduce((sum, val) => sum + val, 0));
console.log(memoizedFunction(1, 2, 3)); // 6
console.log(memoizedFunction(1, 2, 3)); // 6 (cached result)
lodash.memoize is a function from the Lodash library that provides similar memoization capabilities. It allows you to memoize functions and customize the cache key resolver. Compared to memoizerific, lodash.memoize is part of a larger utility library and may offer more flexibility in terms of cache key customization.
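The resolver idea can be sketched in a few lines. This is a hypothetical stand-in to illustrate the concept, not lodash's actual implementation; `memoizeWithResolver` is an invented name.

```javascript
// Minimal sketch of the resolver concept: the resolver maps the arguments to a
// cache key, so callers control what counts as "the same input".
function memoizeWithResolver(fn, resolver) {
  const cache = new Map();
  return function (...args) {
    // Default to keying on the first argument only, as lodash.memoize does.
    const key = resolver ? resolver(...args) : args[0];
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

// Key on all arguments by joining them, so (1, 2) and (1, 3) get distinct keys.
const add = memoizeWithResolver((a, b) => a + b, (a, b) => `${a}|${b}`);
console.log(add(1, 2)); // 3
console.log(add(1, 3)); // 4 (different key, so no stale cache hit)
```

Without a custom resolver, keying on the first argument alone would make `add(1, 3)` wrongly return the cached result for `add(1, 2)`.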
fast-memoize is a lightweight and fast memoization library. It focuses on performance and simplicity, making it a good alternative to memoizerific for scenarios where speed is critical. However, it may not offer as many features as memoizerific, such as cache size limitation.
memoizee is a comprehensive memoization library that offers a wide range of features, including cache size limitation, cache expiration, and support for various cache storage mechanisms. It is more feature-rich compared to memoizerific but may be more complex to use.
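Cache expiration is one feature memoizerific does not offer. The idea behind it can be sketched as follows — an illustration of the concept only, not memoizee's actual API or internals:

```javascript
// Sketch of time-based cache expiration (a TTL): entries are served from the
// cache only while they are younger than ttlMs, then recomputed.
function memoizeWithTTL(fn, ttlMs) {
  const cache = new Map(); // key -> { value, expiresAt }
  return function (...args) {
    const key = JSON.stringify(args);
    const hit = cache.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = fn(...args);
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}

let calls = 0;
const cached = memoizeWithTTL((n) => { calls += 1; return n * 2; }, 1000);
cached(21); // computes
cached(21); // served from cache until the 1s TTL expires
console.log(calls); // 1
```

Note the JSON-based keying here is a simplification: it only works for serializable arguments, unlike memoizerific's reference-based comparison.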
Fast (see benchmarks), small (1k min/gzip), efficient JavaScript memoization lib to memoize JS functions.
Uses JavaScript's Map() object for instant lookups, or a performant polyfill if Map is not available - does not do expensive serialization or string manipulation.
Supports multiple complex arguments. Includes least-recently-used (LRU) caching to maintain only the most recent specified number of results.
Compatible with the browser and nodejs.
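The Map-plus-LRU mechanism described above can be sketched in a few lines. This is an illustration of the technique, not the library's actual internals: `Map` preserves insertion order, so re-inserting a key on every access keeps the least recently used entry at the front for cheap eviction.

```javascript
// Minimal LRU cache built on Map's insertion-order guarantee.
class LruCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      // evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const lru = new LruCache(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touch 'a', so 'b' is now least recently used
lru.set('c', 3); // exceeds the limit, evicting 'b'
console.log(lru.get('b')); // undefined
console.log(lru.get('a')); // 1
```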
Memoization is the process of caching function results so that they may be returned cheaply without re-execution if the function is called again using the same arguments. This is especially useful with the rise of [redux](https://github.com/rackt/redux), and the push to calculate all derived data on the fly instead of maintaining it in state.
NPM:
npm install memoizerific --save
Or use one of the compiled distributions, compatible with any environment (UMD):
const memoizerific = require('memoizerific');
// memoize the 50 most recent argument combinations of our function
const memoized = memoizerific(50)(function(arg1, arg2, arg3) {
// many long expensive calls here
});
memoized(1, 2, 3); // that took long to process
memoized(1, 2, 3); // this one was instant!
memoized(2, 3, 4); // expensive again :(
memoized(2, 3, 4); // this one was cheap!
Or with complex arguments:
const
complexArg1 = { a: { b: { c: 99 }}}, // hairy nested object
complexArg2 = [{ z: 1}, { q: [{ x: 3 }]}], // objects within arrays within arrays
complexArg3 = new Set(); // new Set object
memoized(complexArg1, complexArg2, complexArg3); // slow
memoized(complexArg1, complexArg2, complexArg3); // instant!
There are two required arguments:
limit (required):
the max number of items to cache before the least recently used items are removed.
fn (required):
the function to memoize.
The arguments are specified like this:
memoizerific(limit)(fn);
Examples:
// memoize 1 argument combination
memoizerific(1)(function(arg1, arg2){});
// memoize the last 10,000 unique argument combinations
memoizerific(10000)(function(arg1, arg2){});
// memoize an unlimited number of results (not recommended)
memoizerific(0)(function(arg1){});
The cache works using LRU logic, purging the least recently used results when the limit is reached. For example:
// memoize 1 result
const myMemoized = memoizerific(1)(function(arg1) {});
myMemoized('a'); // function runs, result is cached
myMemoized('a'); // cached result is returned
myMemoized('b'); // function runs again, new result is cached, old cached result is purged
myMemoized('b'); // cached result is returned
myMemoized('a'); // function runs again
Arguments are compared using strict equality, while taking into account small edge cases like NaN !== NaN (NaN is a valid argument type). A complex object will only trigger a cache hit if it refers to the exact same object in memory, not just another object that has similar properties. For example, the following code will not produce a cache hit even though the objects look the same:
const myMemoized = memoizerific(1)(function(arg1) {});
myMemoized({ a: true });
myMemoized({ a: true }); // not cached, the two objects are different instances even though they look the same
This is because a new object is being created on each invocation, rather than the same object being passed in.
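The NaN edge case mentioned above is worth a quick illustration of why a Map-based cache handles it cleanly: Map keys are compared with SameValueZero, so a NaN key matches NaN even though strict equality says `NaN !== NaN`.

```javascript
// Map key lookup uses SameValueZero, so NaN can serve as a cache key.
const cache = new Map();
cache.set(NaN, 'cached result');
console.log(NaN === NaN);    // false under strict equality
console.log(cache.get(NaN)); // 'cached result' — Map still finds the key
```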
A common scenario where this may appear is when providing options to a function, such as doTask(opts), where opts is an object. Typically it would be called with an inline object, like doTask({ prop1: 10000, prop2: 'abc' }). If that function were memoized, it would never hit the cache, because the opts object would be newly created on each call.
There are several ways around this:
Store constant arguments separately for use later on:
// note: the function is named doTask because do is a reserved word in JavaScript
const doTask = memoizerific(1)(function(opts) {
// function body
});
// store the argument object
const opts = { prop1: 10000, prop2: 'abc' };
doTask(opts);
doTask(opts); // cache hit
Destructure the object and memoize its simple properties (strings, numbers, etc.) using a wrapper function:
// it doesn't matter that a new object is created internally, because the simple values passed to the wrapper are what get memoized
const callDo = memoizerific(1)(function(prop1, prop2) {
return doTask({ prop1, prop2 });
});
callDo(1000, 'abc');
callDo(1000, 'abc'); // cache hit
Meta properties are available for introspection for debugging and informational purposes. They should not be manipulated directly, only read. The following properties are available:
memoizedFn.limit: The cache limit that was passed in. This will never change.
memoizedFn.wasMemoized: True if the last invocation was a cache hit, otherwise false.
memoizedFn.cache: The cache object that stores all the memoized results.
memoizedFn.lru: The LRU object that tracks the most recently used arguments.
For example:
const callDo = memoizerific(1)(function(prop1, prop2) {
return doTask({ prop1, prop2 });
});
callDo(1000, 'abc');
console.log(callDo.wasMemoized); // false
callDo(1000, 'abc');
console.log(callDo.wasMemoized); // true
There are many memoization libs available for JavaScript. Some of them have specialized use-cases, such as memoizing file-system access or server async requests, while others, such as this one, tackle the more general case of memoizing standard synchronous functions. Criteria to look for include speed, a configurable cache limit, and support for complex arguments.
Two libs with traction that meet the criteria are:
✔ Memoizee (@medikoo)
✔ LRU-Memoize (@erikras)
Benchmarks were performed with complex data. Example arguments look like:
myMemoized(
{ a: 1, b: [{ c: 2, d: { e: 3 }}] }, // 1st argument
[{ x: 'x', q: 'q', }, { b: 8, c: 9 }, { b: 2, c: [{x: 5, y: 3}, {x: 2, y: 7}] }, { b: 8, c: 9 }, { b: 8, c: 9 }], // 2nd argument
{ z: 'z' }, // 3rd argument
... // 4th, 5th... argument
);
Testing involves calling the memoized functions thousands of times with varying numbers of arguments (between 2 and 8) and varying amounts of data repetition (more repetition means more cache hits, and vice versa).
The following measurements are from 5,000 iterations of each combination of argument count and variance, on Firefox 44:
Cache Size | Num Args | Approx. Cache Hits (variance) | LRU-Memoize | Memoizee | Memoizerific | % Faster |
---|---|---|---|---|---|---|
10 | 2 | 99% | 19ms | 31ms | 10ms | 90% |
10 | 2 | 62% | 212ms | 319ms | 172ms | 23% |
10 | 2 | 7% | 579ms | 617ms | 518ms | 12% |
100 | 2 | 99% | 137ms | 37ms | 20ms | 85% |
100 | 2 | 69% | 696ms | 245ms | 161ms | 52% |
100 | 2 | 10% | 1,057ms | 649ms | 527ms | 23% |
500 | 4 | 95% | 476ms | 67ms | 62ms | 8% |
500 | 4 | 36% | 2,642ms | 703ms | 594ms | 18% |
500 | 4 | 11% | 3,619ms | 880ms | 725ms | 21% |
1000 | 8 | 95% | 1,009ms | 52ms | 65ms | 25% |
1000 | 8 | 14% | 10,477ms | 659ms | 635ms | 4% |
1000 | 8 | 1% | 6,943ms | 1,501ms | 1,466ms | 2% |
Cache Size : The maximum number of results to cache.
Num Args : The number of arguments the memoized function accepts, ex. fn(arg1, arg2, arg3) is 3.
Approx. Cache Hits (variance) : How varied the passed in arguments are. If the exact same arguments are always used, the cache would be hit 100% of the time. If the same arguments are never used, the cache would be hit 0% of the time.
% Faster : How much faster the 1st best performer was from the 2nd best performer (not against the worst performer).
LRU-Memoize performed well with few arguments and lots of cache hits, but degraded quickly as the parameters became less favorable. At 4+ arguments it was up to 20x slower, enough to cause material concern.
Memoizee performed reliably with good speed.
Memoizerific was fastest by about 30% with predictable decreases in performance as tests became more challenging. It is built for real-world production use.
Released under an MIT license.
Like it, star it.
The npm package memoizerific receives a total of 1,111,402 weekly downloads. As such, memoizerific's popularity is classified as popular.
memoizerific shows an unhealthy version release cadence and project activity: the last version was released a year ago. It has 1 open source maintainer collaborating on the project.