node-shared-cache
Interprocess shared memory cache for Node.JS
It supports automatic memory management and fast object serialization, and uses a hash map plus an LRU cache internally to maintain its contents.
Updates
Install

Install node-gyp first if you do not have it installed:

```sh
sudo npm install node-gyp -g
```

Then:

```sh
npm install kyriosli/node-shared-cache
```
Terms of Use
This software (source code and its binary builds) is absolutely copy free; any download or modification is permitted, and commercial use is not prohibited.
But due to the complexity of this software, bugs or runtime exceptions may occur when programs that include it run into unexpected situations. In most cases these should be harmless, but they also have a chance of causing:
- program crash
- system down
- software damage
- hardware damage
any of which could lead to data corruption or even economic loss.
So when you are using this software, DO:
- check the data
- double check the data
- avoid triggering undefined behavior
To avoid data corruption, a read-write lock is used to ensure that data modification is exclusive. But if something bad happens while a process is writing data (for example, a SIGKILL that crashes the program before the write operation is complete and the lock is released), other processes may never be able to enter the exclusive region again. An auto-recovering lock such as flock, which is released automatically when the process exits, is deliberately not used, because it could let a reader see half-written data, returning wrong results or even causing a segmentation fault.
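The exclusive-write discipline described above can be pictured with a minimal spinlock built on `SharedArrayBuffer` and `Atomics` in plain JavaScript. This is an illustration only, not the module's code; the real lock is implemented natively over the shared memory region:

```js
// Minimal sketch of an exclusive write lock (illustration only).
var lock = new Int32Array(new SharedArrayBuffer(4)); // lock[0]: 0 = free, 1 = held

function acquire() {
    // Atomically flip 0 -> 1; spin while another holder has it.
    while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {}
}

function release() {
    Atomics.store(lock, 0, 0);
}

acquire();
// ...exclusive write to the shared region happens here...
// If the process were SIGKILLed right now, lock[0] would stay 1 forever
// and every other process would spin: exactly the hazard described above.
release();
console.log(Atomics.load(lock, 0)); // 0, i.e. the lock is free again
```

An flock-style lock would avoid the stuck-lock problem, but at the price of letting readers observe a half-written region, which is the trade-off the paragraph above explains.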
Usage
```js
var cache = require('node-shared-cache');
// create a cache named "test" backed by 544KB of shared memory
var obj = new cache.Cache("test", 557056);

// set and get a property
obj.foo = "bar";
console.log(obj.foo);

// enumerate keys
for(var k in obj);
Object.keys(obj);

// delete a property
delete obj.foo;

// objects are stored serialized; reads return a fresh copy
obj.foo = {'foo': 'bar'};
var test = obj.foo = {'foo': 'bar'};
test === obj.foo; // false

// circular references survive the round trip
test.self = test;
obj.foo = test;
test = obj.foo;
test.self === test; // true

// release the shared memory region
cache.release("test");

// atomically increase a numeric key
cache.increase(obj, "foo");
cache.increase(obj, "foo", 3);

// dump keys and values, optionally filtered by key prefix
var values = cache.dump(obj);
values = cache.dump(obj, "foo_");
```
class Cache
constructor
```js
function Cache(name, size, optional block_size)
```
`name` represents a file name in shared memory, `size` represents the memory size in bytes to be used, and `block_size` denotes the size of one memory block. `block_size` can be any of:
- cache.SIZE_64 (6): 64 bytes (default)
- cache.SIZE_128 (7): 128 bytes
- cache.SIZE_256 (8): 256 bytes
- cache.SIZE_512 (9): 512 bytes
- cache.SIZE_1K (10): 1KB
- cache.SIZE_2K (11): 2KB
- ...
- cache.SIZE_16K (14): 16KB
Note that:
- `size` should not be smaller than 524288 (512KB)
- block count is 32-aligned
- key length should not be greater than `(block_size - 32) / 2`; for example, when block size is 64 bytes, the maximum key length is 16 chars
- key length should also not be greater than 256

So when `block_size` is set to the default, the maximum memory size that can be used is 128M, and the maximum number of keys that can be stored is 2088960 (8192 blocks are used for the data structure).
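The figures above can be double-checked with a little arithmetic. This sketch assumes a 21-bit block index, which is an inference from the 128M and 2088960 numbers in the text, not something read from the source code:

```js
// Re-derive the documented limits for the default block size.
// The 21-bit block index is an assumption inferred from the text.
var SIZE_64 = 6;                           // exponent: 2^6 = 64 bytes
var blockSize = 1 << SIZE_64;              // 64
var maxBlocks = 1 << 21;                   // 2097152 addressable blocks
var maxMemory = maxBlocks * blockSize;     // 134217728 bytes = 128MB
var reservedBlocks = 8192;                 // blocks used for the data structure
var maxKeys = maxBlocks - reservedBlocks;  // 2088960
var maxKeyLength = (blockSize - 32) / 2;   // 16 chars with 64-byte blocks
console.log(maxMemory, maxKeys, maxKeyLength); // 134217728 2088960 16
```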
property setter
```js
set(name, value)
```
exported methods
release
```js
function release(name)
```

The shared memory named `name` will be released. Throws an error if the shared memory is not found. Note that this method simply calls `shm_unlink` and does not check whether the memory region was really initiated by this module.

Don't call this method while the cache is still in use by another process; doing so may cause a memory leak.
clear
```js
function clear(instance)
```

Clears all entries in the cache.
increase
```js
function increase(instance, name, optional increase_by)
```

Increases a key in the cache by an integer amount (default 1). If the key is absent, or its value is not an integer, the key is set to `increase_by`.
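These semantics can be pictured with a plain object standing in for a real cache instance. This is a hypothetical sketch only; the actual `increase` runs natively and atomically on shared memory:

```js
// Toy model of the documented increase() behavior, using a plain
// object in place of a real shared-cache instance.
function increase(store, name, increaseBy) {
    if (increaseBy === undefined) increaseBy = 1;
    var current = store[name];
    store[name] = Number.isInteger(current) ? current + increaseBy : increaseBy;
    return store[name];
}

var store = {};
increase(store, "hits");        // absent -> set to 1
increase(store, "hits", 3);     // 1 + 3 = 4
store.bad = "oops";
increase(store, "bad");         // not an integer -> reset to 1
console.log(store.hits, store.bad); // 4 1
```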
dump
```js
function dump(instance, optional prefix)
```

Dumps keys and values into a plain object. When `prefix` is given, only keys starting with it are included.
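For illustration, the prefix filtering can be mimicked with a plain object. This is a hypothetical sketch; the real `dump` reads the shared memory directly:

```js
// Toy model of dump() with an optional key prefix, using a plain
// object in place of a real shared-cache instance.
function dump(store, prefix) {
    var out = {};
    for (var key of Object.keys(store)) {
        if (!prefix || key.indexOf(prefix) === 0) out[key] = store[key];
    }
    return out;
}

var store = { foo_a: 1, foo_b: 2, bar: 3 };
console.log(dump(store));          // { foo_a: 1, foo_b: 2, bar: 3 }
console.log(dump(store, "foo_"));  // { foo_a: 1, foo_b: 2 }
```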
Performance
Tests were run in a virtual machine with a single processor:

```
$ node -v
v0.12.4
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
stepping : 7
microcode : 0x70d
cpu MHz : 2300.090
cache size : 15360 KB
...
```
Block size is set to 64 and 1MB of memory is used.
Setting property
When setting a property 1,000,000 times:
```js
var cache = require('node-shared-cache');

var plain = {};
console.time('plain obj');
for(var i = 0; i < 1000000; i++) {
    plain['test' + (i & 127)] = i;
}
console.timeEnd('plain obj');

var obj = new cache.Cache("test", 1048576);
console.time('shared cache');
for(var i = 0; i < 1000000; i++) {
    obj['test' + (i & 127)] = i;
}
console.timeEnd('shared cache');
```
The result is:
```
plain obj: 227ms
shared cache: 492ms (1:2.17)
```
Getting property
When trying to read an existing key:

```js
console.time('read plain obj');
for(var i = 0; i < 1000000; i++) {
    plain['test' + (i & 127)];
}
console.timeEnd('read plain obj');

console.time('read shared cache');
for(var i = 0; i < 1000000; i++) {
    obj['test' + (i & 127)];
}
console.timeEnd('read shared cache');
```
The result is:
```
read plain obj: 138ms
read shared cache: 524ms (1:3.80)
```
When trying to read keys that do not exist:

```js
console.time('read plain obj with key absent');
for(var i = 0; i < 1000000; i++) {
    plain['oops' + (i & 127)];
}
console.timeEnd('read plain obj with key absent');

console.time('read shared cache with key absent');
for(var i = 0; i < 1000000; i++) {
    obj['oops' + (i & 127)];
}
console.timeEnd('read shared cache with key absent');
```
The result is:
```
read plain obj with key absent: 265ms
read shared cache with key absent: 595ms (1:2.24)
```
Enumerating properties
When enumerating all the keys:
```js
console.time('enumerate plain obj');
for(var i = 0; i < 100000; i++) {
    Object.keys(plain);
}
console.timeEnd('enumerate plain obj');

console.time('enumerate shared cache');
for(var i = 0; i < 100000; i++) {
    Object.keys(obj);
}
console.timeEnd('enumerate shared cache');
```
The result is:
```
enumerate plain obj: 1201ms
enumerate shared cache: 4262ms (1:3.55)
```
Warning: Because the shared memory can be modified at any time, even while the current Node.js process is running, relying on the key enumeration result to determine whether a key is cached is unwise. Besides, building strings from the memory slices and collecting them into an array takes quite a long time, so DO NOT USE IT unless you know what you are doing.
Object serialization
We chose a C-style binary serialization method rather than `JSON.stringify` for two reasons:
- performance of serialization and unserialization
- support for circular references
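The circular-reference point is easy to demonstrate: `JSON.stringify` throws on a cycle, while a serializer that tracks visited objects can emit a back-reference instead. Here is a toy flavor of that idea (not the module's actual binary format):

```js
// JSON cannot represent circular references:
var obj = { foo: "bar" };
obj.self = obj;
var jsonFailed = false;
try { JSON.stringify(obj); } catch (e) { jsonFailed = true; }
console.log(jsonFailed); // true

// A serializer that remembers every object it has visited can emit a
// back-reference instead of recursing forever (toy illustration only):
function encode(value, seen) {
    seen = seen || new Map();
    if (value === null || typeof value !== "object") return value;
    if (seen.has(value)) return { $ref: seen.get(value) }; // back-reference
    var id = seen.size;
    seen.set(value, id);
    var out = { $id: id };
    for (var k of Object.keys(value)) out[k] = encode(value[k], seen);
    return out;
}

var encoded = encode(obj);
console.log(encoded.self.$ref === encoded.$id); // true: the cycle survived
```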
The test code:

```js
var input = {env: process.env, arr: [process.env, process.env]};

console.time('JSON.stringify');
for(var i = 0; i < 100000; i++) {
    JSON.stringify(input);
}
console.timeEnd('JSON.stringify');

console.time('binary serialization');
for(var i = 0; i < 100000; i++) {
    obj.test = input;
}
console.timeEnd('binary serialization');

input = JSON.stringify(input);
console.time('JSON.parse');
for(var i = 0; i < 100000; i++) {
    JSON.parse(input);
}
console.timeEnd('JSON.parse');

console.time('binary unserialization');
for(var i = 0; i < 100000; i++) {
    obj.test;
}
console.timeEnd('binary unserialization');
```
The result is:
```
JSON.stringify: 5876ms
binary serialization: 2523ms (2.33:1)
JSON.parse: 2042ms
binary unserialization: 2098ms (1:1.03)
```
TODO