# lmdb - npm Package Compare versions

Comparing version 0.2.0 to 1.5.4

## package.json
Version 0.2.0:

```
{
  "name": "lmdb",
  "description": "A Low-level, LevelDOWN-compatible, Node.js LMDB binding",
  "contributors": [
    "Rod Vagg <r@va.gg> (https://github.com/rvagg)"
  ],
  "version": "0.2.0",
  "main": "lmdb.js",
  "keywords": [
    "lmdb",
    "database",
    "mdb",
    "level",
    "leveldown"
  ],
  "dependencies": {
    "bindings": "~1.1.0",
    "nan": "~0.3.1"
  },
  "devDependencies": {
    "tap": "*",
    "rimraf": "*",
    "abstract-leveldown": "*"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/rvagg/lmdb.git"
  },
  "scripts": {
    "test": "tap ./test.js test/*-test.js --stderr"
  },
  "license": "MIT"
}
```

Version 1.5.4:

```
{
  "name": "lmdb",
  "author": "Kris Zyp",
  "version": "1.5.4",
  "description": "Simple, efficient, scalable data store wrapper for LMDB",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "http://github.com/DoctorEvidence/lmdb-store"
  },
  "keywords": [
    "lmdb",
    "database",
    "mdb",
    "lightning"
  ],
  "type": "commonjs",
  "main": "./index.js",
  "exports": {
    "import": "./index.mjs",
    "require": "./index.js"
  },
  "types": "./index.d.ts",
  "tsd": {
    "directory": "test/types"
  },
  "scripts": {
    "install": "node-gyp-build",
    "recompile": "node-gyp configure && node-gyp build",
    "test": "mocha test/**.test.js --recursive && npm run test:types",
    "test2": "mocha tests -u tdd",
    "test:types": "tsd",
    "benchmark": "node --turbo-fast-api-calls ./benchmark/index.js",
    "benchmark-ll": "node ./benchmark/low-level.js"
  },
  "gypfile": true,
  "dependencies": {
    "mkdirp": "^1.0.4",
    "nan": "^2.14.2",
    "node-gyp-build": "^4.2.3",
    "weak-lru-cache": "^0.4.1"
  },
  "optionalDependencies": {
    "msgpackr": "^1.3.2"
  },
  "devDependencies": {
    "@types/node": "latest",
    "benchmark": "^2.1.4",
    "chai": "^4.3.4",
    "fs-extra": "^9.0.1",
    "jshint": "^2.12.0",
    "mocha": "^8.3.2",
    "node-gyp": "^8.0.0",
    "rimraf": "^3.0.2",
    "tsd": "^0.14.0"
  }
}
```

## README.md

The 0.2.0 README (documenting the original LevelDOWN-compatible binding) was replaced wholesale by the lmdb-store README in 1.5.4; content from both versions follows.

LMDB for Node.js
================

**A Low-level, [LevelDOWN](https://github.com/rvagg/node-leveldown)-compatible, Node.js LMDB binding**

[![NPM](https://nodei.co/npm/lmdb.png?stars&downloads)](https://nodei.co/npm/lmdb/) [![NPM](https://nodei.co/npm-dl/lmdb.png)](https://nodei.co/npm/lmdb/)

# lmdb-store
[![license](https://img.shields.io/badge/license-MIT-brightgreen)](LICENSE)
[![npm version](https://img.shields.io/npm/v/lmdb-store.svg?style=flat-square)](https://www.npmjs.org/package/lmdb-store)
[![get](https://img.shields.io/badge/get-4.5%20MOPS-yellow)](README.md)
[![put](https://img.shields.io/badge/put-1.7%20MOPS-yellow)](README.md)

`lmdb-store` is an ultra-fast interface to LMDB; probably the fastest and most efficient NodeJS key-value/database interface that exists for full storage and retrieval of structured JS data (objects, arrays, etc.) in a true persisted, scalable, [ACID-compliant](https://en.wikipedia.org/wiki/ACID) database. It provides a simple interface for interacting with LMDB, as a key-value store, that makes it easy to fully leverage the power, crash-proof design, and efficiency of LMDB using intuitive JavaScript, and it is designed to scale across multiple processes or threads. `lmdb-store` offers several key features that make it idiomatic, highly performant, and easy to use LMDB efficiently:
* High-performance translation of JS values and data structures to/from binary key/value data
* Queueing of asynchronous off-thread write operations with a promise-based API
* Simple transaction management
* Iterable queries/cursors
* Automated database growth
* Record versioning and optimistic locking for scalability/concurrency
* Optional native off-main-thread compression with high-performance LZ4 compression
* And ridiculously fast and efficient:

Benchmarking on Node 14.9, with a 3.4GHz i7-4770 Windows machine, a `get` operation, using JS numbers as keys, retrieving data from the database (random access), and decoding the data into a structured object with 10 properties (using the default [MessagePack encoding](https://github.com/kriszyp/msgpackr)), can be done in less than one microsecond, or a little over 1,400,000/sec on a single thread. This is almost twice as fast as a single native `JSON.parse` call with the same object, without any DB interaction! LMDB scales effortlessly across multiple processes or threads: over 4,500,000 operations/sec on the same 4/8 core computer by running across multiple threads. By running writes on a separate transactional thread, writing is extremely fast as well. When encoding the same objects, full encoding and writes can be performed at about 500,000 puts/second, or 1,700,000 puts/second on multiple threads.
"LMDB" is *[Symas Lightning Memory-Mapped Database](http://symas.com/mdb/)*.
## Design
> LMDB is an ultra-fast, ultra-compact key-value embedded data store developed by Symas for the OpenLDAP Project. It uses memory-mapped files, so it has the read performance of a pure in-memory database while still offering the persistence of standard disk-based databases, and is only limited to the size of the virtual address space, (it is not limited to the size of physical RAM). Note: LMDB was originally called MDB, but was renamed to avoid confusion with other software associated with the name MDB
`lmdb-store` handles translation of JavaScript values, primitives, arrays, and objects, to and from the binary storage of LMDB keys and values with highly optimized code using native C++ code for breakneck performance. It supports multiple types of JS values for keys and values, making it easy to use idiomatic JS for storing and retrieving data.
LMDB for Node.js, is primarily designed to serve as a back-end to **[LevelUP](https://github.com/rvagg/node-levelup)**, it is strongly recommended that you use LevelUP in preference to this library directly.
`lmdb-store` is designed for synchronous reads, and asynchronous writes. In idiomatic NodeJS code, I/O operations are performed asynchronously. `lmdb-store` observes this design pattern; because LMDB is a memory-mapped database, reading and writing within a transaction does not use any I/O (other than the slight possibility of a page fault), and can almost always be performed faster than Node's event queue callbacks can even execute, and it is easier to write code for instant synchronous values from reads. On the otherhand, in default mode with sync'ed/flushed transactions, commiting transactions does involve I/O, and furthermore can achieve vastly higher throughput by batching operations. The entire transaction of a batch operation can be performed in a separate thread. Consequently, `lmdb-store` is designed for transactions to go through this asynchronous batching process and return a simple promise that resolves once data is written and flushed to disk.
<a name="platforms"></a>
Tested & supported platforms
----------------------------
With the default sync'ing functionality, LMDB has a crash-proof design; a machine can be turned off at any point, and data can not be corrupted unless the written data is actually changed or tampered. Writing data and waiting for confirmation that has been writted to the physical medium is critical for data integrity, but is well known to have latency (although not necessarily less efficient). However, by batching writes, when a database is under load, slower transactions enable more writes per transaction, and lmdb-store is able to drive LMDB to achieve the maximum levels of throughput with fully sync'ed operations, preserving both the durability/safety of the transactions and legendary performance.
* **Linux**
* *Others... testing soon*
`lmdb-store` supports and encourages the use of conditional writes; this allows for atomic operations that are dependendent on previously read data, and most transactional types of operations can be written with an optimistic-locking based, atomic-conditional-write pattern. This allows `lmdb-store` to delegate writes to off-thread execution, and scale to handle concurrent execution across many processes or threads while maintaining data integrity.
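As a quick sketch of this optimistic-locking pattern (the key, value, and version numbers here are hypothetical, and the store is assumed to be opened with `useVersions` enabled, as described later):

```
// only commits if the entry still has version 3 when the off-thread
// transaction executes; the promise resolves to false otherwise
let success = await myStore.put('my-key', { balance: 100 }, 4, 3);
if (!success) {
  // another process/thread updated the entry first; re-read and retry
}
```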
<a name="api"></a>
## API
When an `lmdb-store` automatically handles automatically database growth, expanding file size with a smart heuristic that minimizes file fragmentation (as you would expect from a database)..
* <a href="#ctor"><code><b>lmdb()</b></code></a>
* <a href="#lmdb_open"><code><b>lmdb#open()</b></code></a>
* <a href="#lmdb_close"><code><b>lmdb#close()</b></code></a>
* <a href="#lmdb_put"><code><b>lmdb#put()</b></code></a>
* <a href="#lmdb_get"><code><b>lmdb#get()</b></code></a>
* <a href="#lmdb_del"><code><b>lmdb#del()</b></code></a>
* <a href="#lmdb_batch"><code><b>lmdb#batch()</b></code></a>
* <a href="#lmdb_approximateSize"><code><b>lmdb#approximateSize()</b></code></a>
* <a href="#lmdb_getProperty"><code><b>lmdb#getProperty()</b></code></a>
* <a href="#lmdb_iterator"><code><b>lmdb#iterator()</b></code></a>
* <a href="#iterator_next"><code><b>iterator#next()</b></code></a>
* <a href="#iterator_end"><code><b>iterator#end()</b></code></a>
* <a href="#lmdb_destroy"><code><b>lmdb.destroy()</b></code></a>
* <a href="#lmdb_repair"><code><b>lmdb.repair()</b></code></a>
`lmdb-store` provides optional compression using LZ4 that works in conjunction with the asynchronous writes by performing the compression in the same thread (off the main thread) that performs the writes in a transaction. LZ4 is extremely fast, and decompression can be performed at roughly 5GB/s, so excellent storage efficiency can be achieved with almost negligible performance impact.
`lmdb-store` is built on the excellent [node-lmdb](https://github.com/Venemo/node-lmdb) package.
--------------------------------------------------------
<a name="ctor"></a>
### lmdb(location)
<code>lmdb()</code> returns a new Node.js **LMDB** instance. `location` is a String pointing to the LMDB store to be opened or created.
--------------------------------------------------------
<a name="lmdb_open"></a>

### lmdb#open([options, ]callback)

<code>open()</code> is an instance method on an existing database object.

## Usage
An lmdb-store instance is created by using the `open` export from the main module:
```
const { open } = require('lmdb-store');
// or
// import { open } from 'lmdb-store';
let myStore = open({
  path: 'my-store',
  // any options go here, we can turn on compression like this:
  compression: true,
});
await myStore.put('greeting', { someText: 'Hello, World!' });
myStore.get('greeting').someText // 'Hello, World!'
// or
myStore.transactionAsync(() => {
  myStore.put('greeting', { someText: 'Hello, World!' });
  myStore.get('greeting').someText // 'Hello, World!'
});
```
(see store options below for more options)

Once you have created a store, you can store and retrieve values using keys:
The `callback` function will be called with no arguments when the database has been successfully opened, or with a single `error` argument if the open operation failed for any reason.

### Keys
When using the various APIs, keys can be any JS primitive (string, number, boolean, symbol), an array of primitives, or a Buffer. These primitives are translated to binary keys used by LMDB in such a way that consistent ordering is preserved. Numbers are ordered naturally and come before strings, which are ordered lexically. The keys are stored with type information preserved. The `getRange` operations that return a set of entries will return entries with the original JS primitive values for the keys. If arrays are used as keys, they are ordered by the first value in the array, with each subsequent element being a tie-breaker. Numbers are stored as doubles, with reversal of the sign bit for proper ordering, plus type information, so any JS number can be used as a key. For example, here is the ordering of some different keys:
```
Symbol.for('even symbols')
-10 // negative supported
-1.1 // decimals supported
400
3E10
'Hello'
['Hello', 'World']
'World'
'hello'
['hello', 1, 'world']
['hello', 'world']
```
You can override the default encoding of keys, and cause keys to be returned as node buffers, using the `keyIsBuffer` database option (generally slower).
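For instance, a compound array key round-trips directly (a sketch; the key and value here are hypothetical):

```
await myStore.put(['hello', 1, 'world'], { some: 'data' });
myStore.get(['hello', 1, 'world']) // returns the stored object
```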
### Values
You can store a wide variety of JavaScript values and data structures in lmdb-store, including objects (with arbitrary complexity), arrays, buffers, strings, numbers, etc. Even full structural cloning (with cycles) is optionally supported. Values are stored and retrieved according to the database encoding, which can be set using the `encoding` property on the database options. By default, data is stored using MessagePack, but these encodings are supported:

* `msgpack` (default) - All values are stored by serializing the value as MessagePack (using the [msgpackr](https://github.com/kriszyp/msgpackr) package). Values are decoded and parsed on retrieval, so `get` and `getRange` will return the object, array, or other value that you have stored. The msgpackr package is extremely fast (usually faster than native JSON), and provides the most flexibility in storing different value types. See the Shared Structures section for how to achieve maximum efficiency with this.
* `cbor` - This specifies that all values use the CBOR format, which requires that the [cbor-x](https://github.com/kriszyp/cbor-x) package be installed. This package is based on [msgpackr](https://github.com/kriszyp/msgpackr) and supports all the same options.
* `json` - All values are stored by serializing the value as JSON (using JSON.stringify) and encoded with UTF-8. Values are decoded and parsed on retrieval using JSON.parse. Generally this does not perform as well as msgpack, nor does it support as many value types.
* `string` - All values should be strings and are stored by encoding with UTF-8. Values are returned as strings from `get`.
* `binary` - Values are returned as (Node) buffer objects, representing the raw binary data. Note that creating buffer objects in NodeJS has some overhead, and while this is a fast and useful way to directly store binary data, the other data encodings provide a faster and more optimized process for serializing and deserializing structured data.
* `ordered-binary` - Use the same encoding as the default encoding for keys, which serializes primitive values with consistent ordering. This is primarily useful in `dupSort` stores, where data values are ordered, and having consistent key and value ordering is helpful.

#### `options`

The optional `options` argument may contain:
* `'createIfMissing'` *(boolean, default: `true`)*: If `true`, will initialise an empty data store at the specified location if one doesn't already exist. If `false` and a database doesn't exist, you will receive an error in your `open()` callback and your database won't open.
* `'errorIfExists'` *(boolean, default: `false`)*: If `true`, you will receive an error in your `open()` callback if the database exists at the specified location.
* `'mapSize'` *(integer, default: `10485760` (10MB))*: Specify the size of the memory map, which is also the **maximum size of the database**. The value should be chosen as large as possible, to accommodate future growth of the database. The size may be changed by closing and reopening the environment. Any attempt to set a size smaller than the space already consumed by the environment will be silently changed to the current size of the used space. The size should be a multiple of the OS page size.
* `'maxReaders'` *(integer, default: `126`)*: Specify the number of slots in the readers table, which determines the maximum number of concurrent iterators. The reader table is stored in the lock file and occupies 64 bytes per slot. Iterators error with `MDB_READERS_FULL` when there are no free reader slots, so set this to allow space for as many concurrent iterators as you expect to use. The size may be changed by closing and reopening the environment.
* `'sync'` *(boolean, default: `true`)*: By default, system buffers are flushed to disk after committing transactions (which are performed on every operation). Use this option to turn off this behaviour to speed up writes, at the risk of losing writes upon system crash. Note that setting `'sync'` to `false` and `'writeMap'` to `true` leaves the system with no hint for when to write transactions to disk. Using `'mapAsync'` with `'writeMap'` may be preferable.
* `'readOnly'` *(boolean, default: `false`)*: Open the environment in read-only mode. No write operations will be allowed. LMDB will still modify the lock file - except on read-only filesystems, where LMDB does not use locks.
* `'writeMap'` *(boolean, default: `false`)*: Use a writeable *memory map* (unless `'readOnly'` is set). This is faster and uses fewer `malloc` operations. Note that setting `'writeMap'` to `true` will involve the pre-allocation of the data store, as a single file, of `'mapSize'` size.
* `'metaSync'` *(boolean, default: `true`)*: System buffers will be flushed to disk once per transaction (unless `'sync'` is set to `false`). Setting `'metaSync'` to `false` will prevent a metadata flush, deferring it until the system flushes files to disk, or the next (non-read-only) write. A `'metaSync'` set to `false` will improve performance and maintain database integrity, but a system crash may undo the last committed transaction.
* `'mapAsync'` *(boolean, default: `false`)*: When using a `'writeMap'`, use asynchronous flushes to disk. As with `'sync'` set to `false`, a system crash can then corrupt the database or lose the last transactions.

Once you have a store, the following methods are available:

### `store.get(key): any`
This will retrieve the value at the specified key. The `key` must be a JS value/primitive as described above, and the return value will be the stored data (dependent on the encoding), or `undefined` if the entry does not exist.

### `store.getEntry(key): any`
This will retrieve the entry at the specified key. The `key` must be a JS value/primitive as described above, and the return value will be the stored data (dependent on the encoding), or `undefined` if the entry does not exist. An entry is an object with a `value` property for the value in the database, and a `version` property for the version number of the entry in the database (if `useVersions` is enabled for the database).

### `store.put(key, value, version?: number, ifVersion?: number): Promise<boolean>`
This will store the provided value/data at the specified key. If the database is using versioning (see options below), the `version` parameter will be used to set the version number of the entry. If the `ifVersion` parameter is set, the put will only occur if the existing entry at the provided key has the version specified by `ifVersion` at the instant the commit occurs (LMDB commits are atomic by default). If the `ifVersion` parameter is not set, the put will occur regardless of the previous value.

This operation will be enqueued to be written in a batch transaction. Any other operations that occur within a certain timeframe (until the next event after I/O, by default) will also occur in the same transaction. This will return a promise for the completion of the put. The promise will resolve once the transaction has finished committing. The resolved value of the promise will be `true` if the `put` was successful, and `false` if the put did not occur due to the `ifVersion` not matching at the time of the commit. Once the promise resolves, the transaction will have been fully written to the physical storage medium (a durable commit, guaranteed available in the future as far as the OS/physical storage can permit and confirm, even if there is power loss or system crash).

If this is performed inside a transaction, the put will be executed immediately in the current transaction.

### `store.remove(key, valueOrIfVersion?: number): Promise<boolean>`
This will delete the entry at the specified key. This functions like `put`, with the same optional conditional version. It is batched along with put operations, and returns a promise indicating the success of the operation. If you are using a database with duplicate entries per key (with the `dupSort` flag), you can specify the value to remove as the second parameter (instead of a version).

Again, if this is performed inside a transaction, the removal will be performed in the current transaction.

### `store.transactionAsync(callback: Function): Promise`
This will run the provided callback in a transaction, asynchronously starting the transaction, then running the callback, then later committing the transaction. By running within a transaction, the code in the callback can perform multiple database operations atomically and in isolation (fully [ACID compliant](https://en.wikipedia.org/wiki/ACID)). Any `put` or `remove` operations are immediately written to the transaction and can be immediately read afterwards (you can call `get()` or `getRange()` without awaiting a returned promise) in the transaction.
The callback function will be queued along with other `put` and `remove` operations, run in the same transaction as other operations that have been queued in the current event turn, and executed in the order it was called relative to them. `transactionAsync` will return a promise that will resolve once its transaction has been committed. The promise will resolve to the value returned by the callback function.
For example:
```
let products = open(...);
// decrement count if above zero
function buyShoe() {
  return products.transactionAsync(() => {
    let shoe = products.get('shoe')
    // this is performed atomically, so we can guarantee no other processes
    // modify this entry before we write the new value
    if (shoe.count > 0) {
      shoe.count--
      products.put('shoe', shoe)
      return true // succeeded
    }
    return false // count is zero, no shoes to buy
  })
}
```
--------------------------------------------------------
<a name="lmdb_close"></a>

### lmdb#close(callback)

<code>close()</code> is an instance method on an existing database object. The underlying LMDB database will be closed, and the `callback` function will be called with no arguments if the operation is successful, or with a single `error` argument if the operation failed for any reason.

--------------------------------------------------------
<a name="lmdb_put"></a>

### lmdb#put(key, value[, options], callback)

<code>put()</code> is an instance method on an existing database object, used to store new entries, or overwrite existing entries, in the LMDB store.

The `key` and `value` objects may either be `String`s or Node.js `Buffer` objects and cannot be `undefined` or `null`. Other object types are converted to JavaScript `String`s with the `toString()` method, and the resulting `String` *may not* be zero-length. A richer set of data-types is catered for in LevelUP.

The `callback` function will be called with no arguments if the operation is successful or with a single `error` argument if the operation failed for any reason.

Note that `store.transactionAsync(() => store.put(...))` is functionally equivalent to simply calling `store.put(...)`, queuing the put to be asynchronously committed in a transaction, except that `put` executes the db write operation entirely in a separate worker thread, whereas `transactionAsync` must also synchronize the callback function in the main JS thread to execute (so it is a little less efficient, although still quite fast).

Also, the callback function can be an async function (or return a promise), but this is not recommended. If the function returns a promise, this will delay/defer the commit until the callback's promise is resolved. However, while waiting for the callback to finish, other code may execute operations that would end up in the current transaction and may result in a surprising order of operations, and long-running transactions are generally discouraged since they extend the single write lock.

### `store.childTransaction(callback: Function): Promise`
This will run the provided callback in a transaction much like `transactionAsync`, except an explicit child transaction will be used specifically for this callback. This makes it possible for the operations to be aborted and rolled back. The callback may return the lmdb-store exported `ABORT` constant to abort the child transaction for this callback. Also, if the callback function throws an error (or returns a rejected promise), this will abort the child transaction as well. This `childTransaction` function is not available if caching or `useWritemap` is enabled.

The `childTransaction` function can be executed on its own (to run the child transaction inside the next queued transaction), or it can be executed inside another transaction callback, executing the child transaction within the current transaction.

### `store.putSync(key, value, versionOrOptions?: number | PutOptions): boolean`
This will set the provided value at the specified key, but will do so synchronously. If this is called inside of a synchronous transaction, the put will be performed in the current transaction. If not, a transaction will be started, the put will be executed, the transaction will be committed, and then the function will return. We do not recommend this be used for any high-frequency operations, as it can be vastly slower (often blocking the main JS thread for multiple milliseconds) than the `put` operation (which typically consumes a few _microseconds_ on a worker thread). The third argument may be a version number, or an options object that supports `append`, `appendDup`, `noOverwrite`, `noDupData`, and `version` for the corresponding LMDB put flags.

### `store.removeSync(key, valueOrIfVersion?: number): boolean`
This will delete the entry at the specified key. This functions like `putSync`, providing synchronous entry deletion, and uses the same arguments as `remove`. It returns `true` if there was an existing entry deleted, `false` if there was no matching entry.
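A sketch of aborting a child transaction with the exported `ABORT` constant (the key and the validation outcome are hypothetical):

```
const { open, ABORT } = require('lmdb-store');
let store = open({ path: 'my-store' });
store.childTransaction(() => {
  store.put('draft-key', { state: 'pending' });
  let valid = false; // hypothetical validation outcome
  if (!valid)
    return ABORT; // rolls back only this child transaction
});
```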
--------------------------------------------------------
<a name="lmdb_get"></a>

### lmdb#get(key[, options], callback)

<code>get()</code> is an instance method on an existing database object, used to fetch individual entries from the LMDB store.

The `key` object may either be a `String` or a Node.js `Buffer` object and cannot be `undefined` or `null`. Other object types are converted to JavaScript `String`s with the `toString()` method, and the resulting `String` *may not* be zero-length. A richer set of data-types is catered for in LevelUP.

#### `options`

* `'asBuffer'` *(boolean, default: `true`)*: Used to determine whether to return the `value` of the entry as a `String` or a Node.js `Buffer` object. Note that converting from a `Buffer` to a `String` incurs a cost, so if you need a `String` (and the `value` can legitimately become a UTF-8 string) then you should fetch it as one with `asBuffer: false` and you'll avoid this conversion cost.

The `callback` function will be called with a single `error` argument if the operation failed for any reason. If successful, the first argument will be `null` and the second argument will be the `value` as a `String` or `Buffer` depending on the `asBuffer` option.

### `store.ifVersion(key, ifVersion: number, callback): Promise<boolean>`
This executes a block of conditional writes, conditionally executing any puts or removes that are called in the callback, using the condition that the provided key's entry must have the provided version.

### `store.ifNoExists(key, callback): Promise<boolean>`
This executes a block of conditional writes, conditionally executing any puts or removes that are called in the callback, using the condition that the provided key's entry must not exist yet.

### `store.transactionSync(callback: Function)`
This will begin a synchronous transaction, execute the provided callback function, and then commit the transaction. The provided function can perform `get`s, `put`s, and `remove`s within the transaction, and the result will be committed. The `callback` function can return a promise to indicate an ongoing asynchronous transaction, but generally you want to minimize how long a transaction is open on the main thread, at least if you are potentially operating with multiple processes.

The callback may return the lmdb-store exported `ABORT` constant, or throw an error from the callback, to abort the transaction for this callback.

If this is called inside an existing transaction and child transactions are supported (no write maps or caching), this will execute as a child transaction (and can be aborted); otherwise it will simply execute as part of the existing transaction (in which case it can't be aborted).
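For example, `ifNoExists` can guard a one-time initialization (a sketch with a hypothetical key):

```
store.ifNoExists('config', () => {
  store.put('config', { initialized: true });
}).then((committed) => {
  // committed is false if 'config' already existed at commit time
});
```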
### `store.getRange(options: RangeOptions): Iterable<{ key, value: Buffer }>`
This starts a cursor-based query of a range of data in the database, returning an iterable that also has `map`, `filter`, and `forEach` methods. The `start` and `end` indicate the starting and ending keys for the range. The `reverse` flag can be used to indicate reverse traversal. The `limit` can limit the number of entries returned. The returned cursor/query is lazy, and retrieves data _as_ iteration takes place, so a large range can be specified without forcing all the entries to be read and loaded into memory upfront, and one can exit the loop without traversing the whole range in the database. Since the query is iterable, we can use it directly in a for-of loop:
```
for (let { key, value } of db.getRange({ start, end })) {
  // for each key-value pair in the given range
}
```
Or we can use the provided iterative methods on the returned results:
```
db.getRange({ start, end })
  .filter(({ key, value }) => test(key))
  .forEach(({ key, value }) => {
    // for each key-value pair in the given range that matched the filter
  })
```
Note that `map` and `filter` are also lazy; they will only be executed once their returned iterable is iterated or `forEach` is called on it. The `map` and `filter` functions also support async/promise-based callbacks, and you can create an async iterable if the callback functions execute asynchronously (return a promise).
--------------------------------------------------------
<a name="lmdb_del"></a>

### lmdb#del(key[, options], callback)

<code>del()</code> is an instance method on an existing database object, used to delete entries from the LMDB store.

The `key` object may either be a `String` or a Node.js `Buffer` object and cannot be `undefined` or `null`. Other object types are converted to JavaScript `String`s with the `toString()` method, and the resulting `String` *may not* be zero-length. A richer set of data-types is catered for in LevelUP.

The `callback` function will be called with no arguments if the operation is successful or with a single `error` argument if the operation failed for any reason.

We can also query with an offset to skip a certain number of entries, and limit the number of entries to iterate through:
```
db.getRange({ start, end, offset: 10, limit: 10 }) // skip first 10 and get next 10
```
If you want a true array from the range results, the `asArray` property will return the results as an array.

#### Snapshots
By default, a range iterator will use a database snapshot, using a single read transaction that remains open and gives a consistent view of the database at the time it was started, for the duration of iterating through the range. However, if the iteration will take place over a long period of time, keeping a read transaction open for that long can interfere with LMDB's free-space collection and reuse, and increase the database size. If you will be using a long-duration iterator, you can specify the `snapshot: false` flag in the range options to indicate that snapshotting is not necessary, so that read transactions can be reset and renewed while iterating, allowing LMDB to collect any space that was freed during iteration.
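A sketch of a long-lived iteration that opts out of snapshotting (the bounds and the processing function are hypothetical):

```
for (let { key, value } of db.getRange({ start: 'a', end: 'z', snapshot: false })) {
  await processEntry(key, value); // hypothetical slow, asynchronous work
}
```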
### `store.getValues(key, options?: RangeOptions): Iterable<any>`
When using a store with duplicate entries per key (with the `dupSort` flag), you can use this to retrieve all the values for a given key. This will return an iterator just like `getRange`, except each entry will be a value from the database:
```
let db = store.openDB('my-index', {
  dupSort: true
})
await db.put('key1', 'value1')
await db.put('key1', 'value2')
for (let value of db.getValues('key1')) {
  // iterates values 'value1', 'value2'
}
await db.remove('key1', 'value2') // only remove the second value under key1
for (let value of db.getValues('key1')) {
  // now just iterates 'value1'
}
```
You can optionally provide a second argument with the same `options` that `getRange` handles.
--------------------------------------------------------
<a name="lmdb_batch"></a>

### lmdb#batch(operations[, options], callback)

<code>batch()</code> is an instance method on an existing database object, used for very fast bulk-write operations (both *put* and *delete*). The `operations` argument should be an `Array` containing a list of operations to be executed sequentially, although as a whole they are executed within a single transaction on LMDB. Each operation is contained in an object having the following properties: `type`, `key`, `value`, where the *type* is either `'put'` or `'del'`. In the case of `'del'` the `value` property is ignored. Any entries with a `key` of `null` or `undefined` will cause an error to be returned on the `callback`, and any `'type': 'put'` entry with a `value` of `null` or `undefined` will return an error. See [LevelUP](https://github.com/rvagg/node-levelup#batch) for full documentation on how this works in practice.

The `callback` function will be called with no arguments if the operation is successful or with a single `error` argument if the operation failed for any reason.

### `store.getKeys(options: RangeOptions): Iterable<any>`
This behaves like `getRange`, but only returns the keys. If this is a duplicate-key database, each key is only returned once (even if it has multiple values/entries).

### `RangeOptions`
Here are the options that can be provided to the range methods (all are optional):
* `start`: Starting key (will start at the beginning of the db if not provided); can be any valid key type (primitive or array of primitives).
* `end`: Ending key (will finish at the end of the db if not provided); can be any valid key type (primitive or array of primitives).
* `reverse`: Boolean indicating reverse traversal through keys (does not reverse by default).
* `limit`: Number indicating the maximum number of entries to read (no limit by default).
* `offset`: Number indicating the number of entries to skip before starting iteration (0 by default).
* `versions`: Boolean indicating if versions should be included in returned entries (not by default).
* `snapshot`: Boolean indicating if a database snapshot is used for iteration (true by default).
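A sketch combining `getKeys` with a couple of these options (the limit is arbitrary):

```
for (let key of db.getKeys({ reverse: true, limit: 100 })) {
  // at most 100 keys, traversed from the end of the db backwards
}
```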
### `store.openDB(database: string|{name:string,...})`
LMDB supports multiple databases per environment (an environment is a single memory-mapped file). When you initialize an LMDB store with `open`, the store uses the default root database. However, you can use multiple databases per environment/file and instantiate a store for each one. If you are going to be opening many databases, make sure you set the `maxDbs` option (it defaults to 12). For example, we can open multiple stores for a single environment:
```
const { open } = require('lmdb-store');
let rootStore = open('all-my-data');
let usersStore = rootStore.openDB('users');
let groupsStore = rootStore.openDB('groups');
let productsStore = rootStore.openDB('products');
```
Each of the opened/returned stores has the same API as the default store for the environment. The stores for one environment also share the same batch queue and automated transactions with each other, so immediately writing data from two stores in the same environment will be batched together in the same commit. For example:
```
usersStore.put('some-user', { data: userInfo });
groupsStore.put('some-group', { groupData: moreData });
```
Both these puts will be batched and committed in the same transaction in the next event turn.
Also, you can start a transaction from one store and make writes from any of the stores in that same environment (and they will be a part of the same transaction):
```
rootStore.transactionAsync(() => {
usersStore.put('some-user', { data: userInfo });
groupsStore.put('some-group', { groupData: moreData });
});
```
--------------------------------------------------------
<a name="lmdb_iterator"></a>

### lmdb#iterator([options])

<code>iterator()</code> is an instance method on an existing database object. It returns a new **Iterator** instance which abstracts an LMDB **"cursor"**.

#### `options`

The optional `options` object may contain:

* `'start'`: the key you wish to start the read at. By default it will start at the beginning of the store. Note that the *start* doesn't have to be an actual key that exists; LMDB will simply jump to the *next* key, greater than the key you provide.
* `'end'`: the key you wish to end the read on. By default it will continue until the end of the store. Again, the *end* doesn't have to be an actual key, as an (inclusive) `<=`-type operation is performed to detect the end. You can also use the `destroy()` method instead of supplying an `'end'` parameter to achieve the same effect.

### `getLastVersion(): number`
This returns the version number of the last entry that was retrieved with `get` (assuming it was a versioned database). If you are using a database with `cache` enabled, use `getEntry` instead.

### `close(): void`
This will close the current store. This closes the underlying LMDB database, and if this is the root database (opened with `open`, as opposed to `store.openDB`), it will close the environment (and child stores will no longer be able to interact with the database).

### `store.doesExist(key, valueOrVersion): boolean`
This checks if an entry exists for the given key, and optionally verifies that the version or value exists. If this is a `dupSort` enabled database, you can provide the key and value to check if that key/value entry exists. If you are using a versioned database, you can provide a version number to verify if the entry for the provided key has that specific version number. This returns true if the entry does exist.

### `resetReadTxn(): void`
Normally, lmdb-store will automatically start a reader transaction for get and range operations, periodically resetting the read transaction on new event turns and after any write transactions are committed, to ensure it is using an up-to-date snapshot of the database. However, you can call `resetReadTxn` if you need to manually force the read transaction to reset to the latest snapshot/version of the database. In particular, this may be useful when running with multiple processes, where you need to immediately reset the read transaction based on a known update in another process (rather than waiting for the next event turn).
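A minimal sketch of version-aware reads and writes (the key is hypothetical, and the store is assumed to be opened with `useVersions: true` as described below):

```
let entry = myStore.getEntry('user-1');
if (entry) {
  // entry.value is the stored data, entry.version its version number;
  // the put only commits if the version is unchanged at commit time
  await myStore.put('user-1', { ...entry.value, updated: true }, entry.version + 1, entry.version);
}
```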
* `'reverse'` *(boolean, default: `false`)*: a boolean, set to true if you want the stream to go in reverse order.
* `'keys'` *(boolean, default: `true`)*: whether the callback to the `next()` method should receive a non-null `key`. There is a small efficiency gain if you ultimately don't care what the keys are, as they don't need to be converted and copied into JavaScript.
* `'values'` *(boolean, default: `true`)*: whether the callback to the `next()` method should receive a non-null `value`. There is a small efficiency gain if you ultimately don't care what the values are, as they don't need to be converted and copied into JavaScript.
* `'limit'` *(number, default: `-1`)*: limit the number of results collected by this iterator. This number represents a *maximum* number of results and may not be reached if you get to the end of the store or your `'end'` value first. A value of `-1` means there is no limit.
* `'keyAsBuffer'` *(boolean, default: `true`)*: Used to determine whether to return the `key` of each entry as a `String` or a Node.js `Buffer` object. Note that converting from a `Buffer` to a `String` incurs a cost, so if you need a `String` (and the `key` can legitimately become a UTF-8 string) then you should fetch it as one.
* `'valueAsBuffer'` *(boolean, default: `true`)*: Used to determine whether to return the `value` of each entry as a `String` or a Node.js `Buffer` object.

## Concurrency and Versioning
LMDB and lmdb-store are designed for high concurrency, and we recommend using multiple processes to achieve concurrency with lmdb-store (processes are more robust than threads, threads' advantage of shared memory is minimal with separate NodeJS isolates, and you still get shared memory access across processes when using LMDB). Versioning or asynchronous transactions are the preferred methods for achieving atomicity with concurrent data updates. A version can be stored with an entry, and the data can later be updated conditionally on the version being the expected version. This provides a robust mechanism for concurrent data updates even with multiple processes accessing the same database. To enable versioning, make sure to set the `useVersions` option when opening the database:
```
let myStore = open('my-store', { useVersions: true })
```
You can set a version by using the `version` argument in `put` calls. You can later update data and ensure that the data will only be updated if the version matches the expected version by using the `ifVersion` argument. When retrieving entries, you can access the version number by calling `getLastVersion()`.

You can then make conditional writes; for example:
```
myStore.put('key1', 'value1', 4, 3); // new version of 4, only if previous version was 3
```
```
myStore.ifVersion('key1', 4, () => {
  myStore.put('key1', 'value2', 5); // equivalent to myStore.put('key1', 'value2', 5, 4);
  myStore.put('anotherKey', 'value', 3); // we can do other puts based on the same condition above
  // we can make puts in other stores (from the same db environment) based on same condition too
  myStore2.put('keyInOtherDb', 'value');
});
```
Asynchronous transactions are also a robust way to handle concurrency with multiple processes, and provide a more traditional and flexible mechanism for making atomic, ACID-compliant transactional data changes.

## Shared Structures
Shared structures are a mechanism for storing the structural information about objects in a dedicated entry, outside of the individual entries, for reuse across all of the data in the database, giving much more efficient storage and faster retrieval when storing objects that have the same or similar structures (note that this is only available using the default MessagePack or CBOR encoding, via the msgpackr or cbor-x package). This is highly recommended when storing structured objects with similar object structures (including inside of arrays) in lmdb-store. When enabled, as data is stored, any structural information (the set of property names) is automatically generated and stored in a separate entry, to be reused for storing and retrieving all data for the database. To enable this feature, simply specify the key where lmdb-store can store the shared structures. You can use a symbol as a metadata key, as symbols are outside of the range of the standard JS primitive values:
```
let myStore = open('my-store', {
  sharedStructuresKey: Symbol.for('structures')
})
```
Once shared structures have been enabled, you can store JavaScript objects just as you normally would, and lmdb-store will automatically generate, increment, and save the structural information in the provided key to improve storage efficiency and performance. You never need to directly access this key; just be aware that that entry is being used by lmdb-store.

## Compression
lmdb-store can optionally use off-thread LZ4 compression as part of the asynchronous writes to enable efficient compression with virtually no overhead to the main thread. LZ4 decompression (in `get` and `getRange` calls) is extremely fast and generally has little impact on performance. Compression is turned off by default, but can be turned on by setting the `compression` property when opening a database. The value of `compression` can be `true` or an object with compression settings, including these properties:
* `threshold` - Only entries that are larger than this value (in bytes) will be compressed. This defaults to 1000 (if compression is enabled).
* `dictionary` - This can be a buffer to use as a shared dictionary. This defaults to a shared dictionary in lmdb-store that helps with compressing JSON and English words in small entries. [Zstandard](https://facebook.github.io/zstd/#small-data) provides utilities for [creating your own optimized shared dictionary](https://github.com/lz4/lz4/releases/tag/v1.8.1.2).
For example:
```
let myStore = open('my-store', {
  compression: {
    threshold: 500, // compress any entry larger than 500 bytes
    dictionary: fs.readFileSync('dict.txt') // use your own shared dictionary
  }
})
```
Compression is recommended for large databases that may be close to or larger than available RAM, to improve caching and reduce page faults. If you enable compression for a database, you must ensure that the data is always opened with the same compression setting, so that the data will be properly decompressed.

## Caching
lmdb-store supports caching of entries from stores, and uses an [LRU/LFU (LRFU) and weak-referencing caching mechanism](https://github.com/kriszyp/weak-lru-cache) for highly optimized caching and object tracking. There are several key potential benefits to using caching, including performance, key correlation with object identity, and immediate/synchronous access to saved data. Enabling caching will cache `get`s and `put`s, which can make frequent `get`s much faster. Caching is enabled by providing a truthy value for the `cache` property on the store `options`.

The weak-referencing mechanism works in harmony with JS garbage collection to allow objects to be cached without preventing GC, and to be retrieved from the cache until they have actually been collected from memory, making more efficient use of memory. This can also provide a guarantee of object identity correlation with keys: as long as a retrieved object is in memory, a `get` will always return the existing object, and `get` will never return two copies of the same object (for the same key). The LRFU caching mechanism is scan-resistant, tracking frequency of usage as well as recency.

Because asynchronous `put` operations immediately go in the cache (and are pinned in the cache until committed), with caching enabled, `put` values can be retrieved via `get` immediately and synchronously after the `put` call. Without caching enabled, you need to wait for the `put` promise to resolve (or use asynchronous transactions) before you can access the stored value, but the cache makes the value immediately available without waiting for the commit to finish:
```
store.put('hi', 'there');
store.get('hi'); // can immediately access value without having to await the promise
```
--------------------------------------------------------
<a name="iterator_next"></a>

### iterator#next(callback)

<code>next()</code> is an instance method on an existing iterator object, used to increment the underlying LMDB cursor and return the entry at that location.

The `callback` function will be called with no arguments in any of the following situations:

* the iterator comes to the end of the store
* the `end` key has been reached; or
* the `limit` has been reached

Otherwise, the `callback` function will be called with the following 3 arguments:

* `error` - any error that occurs while incrementing the iterator.
* `key` - either a `String` or a Node.js `Buffer` object, depending on the `keyAsBuffer` argument when the `createIterator()` was called.
* `value` - either a `String` or a Node.js `Buffer` object, depending on the `valueAsBuffer` argument when the `createIterator()` was called.

While caching can improve performance, LMDB itself is extremely fast, and for small objects with sporadic access, caching may not improve performance. Caching tends to provide the most performance benefit for larger objects that may have more significant deserialization costs. Caching does not apply to `getRange` queries. Also note that this requires Node 14.10 or higher (or Node v13.0 with the `--harmony-weak-ref` flag).

If you are using caching with a database that has versions enabled, you should use the `getEntry` method to get the `value` and `version`, as `getLastVersion` will not be reliable (it only returns the version when the data is accessed from the database).

### Asynchronous Transaction Ordering
Asynchronous single operations (`put` and `remove`) are executed in the order they were called, relative to each other. Likewise, asynchronous transaction callbacks (`transactionAsync` and `childTransaction`) are also executed in order relative to other asynchronous transaction callbacks. However, by default all queued asynchronous transaction callbacks are executed _after_ all queued asynchronous single operations. You can enable strict ordering, so that asynchronous transactions are executed in order _with_ the asynchronous single operations, by setting the `asyncTransactionOrder` property to 'strict'.

However, strict ordering comes with a couple of caveats. First, because lmdb-store executes asynchronous single operations on a separate transaction thread, but asynchronous transaction callbacks must execute on the main JS thread, a lot of frequent switching back and forth between single operations and callbacks can significantly reduce performance, since it requires substantial thread switching and event queuing.

Second, if there are asynchronous operations that have been performed, and asynchronous transaction callbacks that are waiting to be called, and a synchronous transaction is executed (`transactionSync`), this must interrupt and split the current asynchronous transaction batch so the synchronous transaction can be executed (the synchronous transaction cannot block to wait for the asynchronous batch if there are outstanding callbacks to execute as part of that async transaction, as that would result in a deadlock). This can potentially create an exception to the general rule that all asynchronous operations performed in one event turn will be part of the same transaction. Of course, each single asynchronous transaction callback is still guaranteed to execute in a single atomic transaction (and calls to `transactionSync` _during_ an asynchronous transaction callback are simply executed as part of the current transaction). With the default ordering of 'after', it is possible for the async transactions to be performed in a separate transaction from the single operations; setting the ordering to 'before' ensures they are always in the same transaction.
### Store Options
The open method can be used to create the main database/environment with the following signature:
`open(path, options)` or `open(options)`
Additional databases can be opened within the main database environment with:
`store.openDB(name, options)` or `store.openDB(options)`
If the `path` has a `.` in it, it is treated as a file name; otherwise it is treated as a directory name, where the data will be stored. The `options` argument to either of the functions should be an object, and it supports the following properties, all of which are optional (except `name` if not otherwise specified):
* `name` - This is the name of the database. This defaults to null (which is the root database) when opening the database environment (`open`). When opening a database within an environment (`openDB`), this is required if not specified in the first parameter.
* `encoding` - Sets the encoding for the database, which can be `'msgpack'`, `'json'`, `'cbor'`, `'string'`, or `'binary'`.
* `sharedStructuresKey` - Enables shared structures and sets the key where the shared structures will be stored.
* `compression` - This enables compression. This can be set to a truthy value to enable compression with default settings, or to an object with compression settings.
* `cache` - Setting this to true enables caching. This can also be set to an object specifying the settings/options for the cache (see [settings for weak-lru-cache](https://github.com/kriszyp/weak-lru-cache#weaklrucacheoptions-constructor)).
* `useVersions` - Set this to true if you will be setting version numbers on the entries in the database. Note that you can not change this flag once a database has entries in it (or they won't be read correctly).
* `encryptionKey` - This enables encryption, and the provided value is the key that is used for encryption. This may be a buffer or string, but must be 32 bytes/characters long. This uses the Chacha8 cipher for fast and secure on-disk encryption of data.
* `keyIsBuffer` - This will cause the database to expect and return keys as node buffers.
* `keyIsUint32` - This will cause the database to expect and return keys as unsigned 32-bit integers.
* `dupSort` - Enables duplicate entries for keys. You will usually want to retrieve the values for a key with `getValues`.
* `strictAsyncOrder` - Maintain strict ordering of execution of asynchronous transaction callbacks relative to asynchronous single operations.
The following additional option properties are only available when creating the main database environment (`open`):
* `path` - This is the file path to the database environment file you will use.
* `maxDbs` - The maximum number of databases to be able to open ([there is some extra overhead if this is set very high](http://www.lmdb.tech/doc/group__mdb.html#gaa2fc2f1f37cb1115e733b62cab2fcdbc)).
* `maxReaders` - The maximum number of concurrent read transactions (readers) to be able to open ([more information](http://www.lmdb.tech/doc/group__mdb.html#gae687966c24b790630be2a41573fe40e2)).
* `commitDelay` - This is the amount of time to wait (in milliseconds) for batching write operations before committing the writes (in a transaction). This defaults to 0. A delay of 0 means more immediate commits with less latency (uses `setImmediate`), but a longer delay (which uses `setTimeout`) can be more efficient at collecting more writes into a single transaction and reducing I/O load. Note that NodeJS timers only have an effective resolution of about 10ms, so a `commitDelay` of 1ms will generally wait about 10ms.
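A sketch combining several of these options (all values are illustrative, not recommendations):

```
let myStore = open('my-data/store.db', { // '.' in the path, so treated as a file name
  encoding: 'msgpack',
  useVersions: true,
  maxDbs: 20,
  commitDelay: 0,
});
```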
--------------------------------------------------------
<a name="iterator_end"></a>
### iterator#end(callback)
<code>end()</code> is an instance method on an existing iterator object. The underlying LMDB cursor will be deleted and the `callback` function will be called with no arguments if the operation is successful or with a single `error` argument if the operation failed for any reason.
#### LMDB Flags
In addition, the following options map to LMDB's env flags, <a href="http://www.lmdb.tech/doc/group__mdb.html">described here</a>. None of these need to be set; the defaults can always be used. Only `noMemInit` is recommended, but the others are available for boosting performance:
* `noMemInit` - This provides a small performance boost (when not using useWritemap) for writes, by skipping zero'ing out malloc'ed data, but can leave application data in unused portions of the database.
* `remapChunks` - This is a flag to specify whether dynamic memory mapping should be used. Enabling this generally makes read operations a little bit slower, but frees up more mapped memory, making it friendlier to other applications. This is enabled by default on 32-bit operating systems (which require this to go beyond a 4GB database size) if `mapSize` is not specified; otherwise it is disabled by default.
* `mapSize` - This can be used to specify the initial amount of how much virtual memory address space (in bytes) to allocate for mapping to the database files. Setting a map size will typically disable `remapChunks` by default unless the size is larger than appropriate for the OS. Different OSes have different allocation limits.
* `pageSize` - This changes the page size of the database.
* `noReadAhead` - This disables read-ahead caching. Turning it off may help random read performance when the DB is larger than RAM and system RAM is full. However, this is not supported by all OSes, including Windows.
* `useWritemap` - Use writemaps, this improves performance by reducing malloc calls, but can increase risk of a stray pointer corrupting data. This is currently disabled on Windows.
* `noSubdir` - Treat `path` as a filename instead of a directory (this is the default if the path appears to end with an extension, i.e. has a '.' in it).
* `noSync` - Doesn't sync the data to disk. We highly discourage this flag, since it can result in data corruption and lmdb-store mitigates performance issues associated with disk syncs by batching.
* `noMetaSync` - This isn't as dangerous as `noSync`, but doesn't improve performance much either.
* `readOnly` - Self-descriptive.
* `mapAsync` - Not recommended; lmdb-store already provides the means to ensure commits are performed in a separate thread (asynchronous to JS), and this flag prevents accurate notification of when flushes finish.
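These flags are passed as additional open options; for example, enabling the one recommended flag above is a one-liner (a sketch):

```
let myStore = open('my-store', { noMemInit: true });
```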
<a name="support"></a>

Getting support
---------------

There are multiple ways you can find help in using LMDB / LevelUP / LevelDB in Node.js:

* **IRC:** you'll find an active group of LevelUP users in the **##leveldb** channel on Freenode, including most of the contributors to this project.
* **Mailing list:** there is an active [Node.js LevelUP](https://groups.google.com/forum/#!forum/node-levelup) Google Group.
* **GitHub:** you're welcome to open an issue here on this GitHub repository if you have a question.

#### Serialization options
If you are using the default encoding of `'msgpack'`, the [msgpackr](https://github.com/kriszyp/msgpackr) package is used for serialization and deserialization. You can provide store options that are passed to msgpackr as well. For example, these options can be potentially useful:
* `structuredClone` - This enables the structured cloning extensions that will encode object/cyclic references and additional built-in types/classes.
* `useFloat32: 4` - Encode floating point numbers in 32-bit format when possible.

You can also use the CBOR format by specifying the encoding of `'cbor'` and installing the [cbor-x](https://github.com/kriszyp/cbor-x) package, which supports the same options.

## Events

The `lmdb-store` instance is an <a href="https://nodejs.org/dist/latest-v11.x/docs/api/events.html#events_class_eventemitter">EventEmitter</a>, allowing applications to listen to database events. There is just one event right now:
<a name="contributing"></a>
Contributing
------------
`beforecommit` - This event is fired before a batched operation begins to start a transaction to write all queued writes to the database. The callback function can perform additional (asynchronous) writes (`put` and `remove`) and they will be included in the transaction about to be performed (this can be useful for updating a global version stamp based on all previous writes, for example).
Node.js LMDB is an **OPEN Open Source Project**. This means that:
##### Build Options
A few LMDB options are available at build time, and can be specified with options with `npm install` (which can be specified in your package.json install script):
`npm install --use_robust=true`: This will enable LMDB's MDB_USE_ROBUST option, which uses robust semaphores/mutexes so that if you are using multiple processes, and one process dies in the middle of transaction, the OS will cleanup the semaphore/mutex, aborting the transaction and allowing other processes to run without hanging. There is a slight performance overhead, but this is recommended if you will be using multiple processes.
> Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
On MacOS, there is a default limit of 10 robust locked semaphores, which imposes a limit on the number of open write transactions (if you have over 10 db environments with a write transaction). If you need more concurrent write transactions, you can increase your maximum undoable semaphore count by setting kern.sysv.semmnu on your local computer. Otherwise don't use the robust mutex option. You can also try to minimize overlapping transactions and/or reduce the number of db environments (and use more databases within each environment).
See the [CONTRIBUTING.md](https://github.com/rvagg/lmdb/blob/master/CONTRIBUTING.md) file for more details.
<a name="licence"></a>

Licence &amp; copyright
-------------------

Copyright (c) 2012-2013 Node.js LMDB contributors.

Node.js LMDB is licensed under an MIT +no-false-attribs license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details.

*Node.js LMDB builds on the excellent work of Howard Chu of Symas Corp and additional contributors. LMDB is issued under [The OpenLDAP Public License](http://www.OpenLDAP.org/license.html).*

## License
`lmdb-store` is licensed under the terms of the MIT license.

Note that Symas (the authors of LMDB) [offers commercial support of LMDB](https://symas.com/lightning-memory-mapped-database/).

This project has no funding needs. If you feel inclined to donate, donate to one of Kris's favorite charities like [Innovations in Poverty Action](https://www.poverty-action.org/) or any of [GiveWell](https://givewell.org)'s recommended charities.

## Related Projects

* lmdb-store is built on top of [node-lmdb](https://github.com/Venemo/node-lmdb)
* lmdb-store uses [msgpackr](https://github.com/kriszyp/msgpackr) for the default serialization of data
* cobase is built on top of lmdb-store: [cobase](https://github.com/DoctorEvidence/cobase)

<a href="https://dev.doctorevidence.com/"><img src="./assets/powers-dre.png" width="203"/></a>

