Comparing version 2.0.0-alpha2 to 2.0.0-beta1
{
"name": "lmdb",
"author": "Kris Zyp",
"version": "2.0.0-alpha2",
"version": "2.0.0-beta1",
"description": "Simple, efficient, scalable data store wrapper for LMDB",
@@ -6,0 +6,0 @@ "license": "MIT",
@@ -15,3 +15,2 @@ [![license](https://img.shields.io/badge/license-MIT-brightgreen)](LICENSE)
* Optional native off-main-thread compression with high-performance LZ4 compression
* Minimal dependencies to ensure stability and efficient memory use
* And ridiculously fast and efficient:
@@ -26,2 +25,4 @@
This library has minimal, tightly-controlled, and maintained dependencies to ensure stability and efficient memory use. It supports both ESM and CJS usage.
This has replaced the previously deprecated (LevelDOWN) `lmdb` package in the NPM package registry, but existing versions of that library are [still available](https://www.npmjs.com/package/lmdb/v/0.2.0).
@@ -46,5 +47,3 @@
```
const { open } = require('lmdb');
// or
// import { open } from 'lmdb';
import { open } from 'lmdb'; // or require
let myStore = open({
@@ -80,2 +79,3 @@ path: 'my-db',
```
null // lowest possible value
Symbol.for('even symbols')
@@ -92,4 +92,5 @@ -10 // negative supported
['hello', 'world']
Buffer.from([255]) // buffers can be used directly, 255 is higher than any byte produced by primitives
```
You can override the default encoding of keys, and cause keys to be returned as node buffers using the `keyIsBuffer` database option (generally slower), or use `keyIsUint32` for keys that are strictly 32-bit unsigned integers.
You can override the default encoding of keys, and cause keys to be returned as node buffers using the `keyIsBuffer` database option (generally slower), use `keyIsUint32` for keys that are strictly 32-bit unsigned integers, or provide a custom key encoder/decoder with `keyEncoder` (see custom key encoding).
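For illustration, a minimal sketch of opening stores with these key options (option names as listed above; the paths and values are hypothetical, and `keyEncoder` is sketched further below under custom key encoding):
```
import { open } from 'lmdb';

// keys are expected and returned as strictly 32-bit unsigned integers
let counters = open({ path: 'counters-db', keyIsUint32: true });
await counters.put(42, 'forty-two');
counters.get(42); // 'forty-two'

// keys are returned as node buffers instead of decoded JS values
let raw = open({ path: 'raw-db', keyIsBuffer: true });
```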
@@ -102,3 +103,3 @@ Once you have created a store, the following methods are available:
### `store.getEntry(key): any`
This will retrieve the entry at the specified key. The `key` must be a JS value/primitive as described above, and the return value will be the stored data (dependent on the encoding), or `undefined` if the entry does not exist. An entry is an object with a `value` property for the value in the database, and a `version` property for the version number of the entry in the database (if `useVersions` is enabled for the database).
This will retrieve the entry at the specified key. The `key` must be a JS value/primitive as described above, and the return value will be the stored entry, or `undefined` if the entry does not exist. An entry is an object with a `value` property for the value in the database (as returned by `store.get`), and a `version` property for the version number of the entry in the database (if `useVersions` is enabled for the database).
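A hedged sketch of how `getEntry` might be used, continuing from the `myStore` opened earlier (the key name is made up for illustration):
```
let entry = myStore.getEntry('greeting');
if (entry !== undefined) {
	console.log(entry.value);   // the stored value, same as myStore.get('greeting')
	console.log(entry.version); // the entry's version number (when useVersions is enabled)
}
```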
@@ -355,2 +356,3 @@ ### `store.put(key, value, version?: number, ifVersion?: number): Promise<boolean>`
* `keyIsUint32` - This will cause the database to expect and return keys as unsigned 32-bit integers.
* `keyEncoder` - Provide a custom key encoder.
* `dupSort` - Enables duplicate entries for keys. You will usually want to retrieve the values for a key with `getValues` (see the sketch below).
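A rough, hypothetical sketch of the `dupSort` option combined with `getValues` (the store path and keys are made up):
```
let tags = open({ path: 'tags-db', dupSort: true });
await tags.put('article-1', 'lmdb');
await tags.put('article-1', 'nodejs');
// iterate all duplicate values stored under the same key
for (let value of tags.getValues('article-1')) {
	console.log(value);
}
```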
@@ -395,5 +397,9 @@ * `strictAsyncOrder` - Maintain strict ordering of execution of asynchronous transaction callbacks relative to asynchronous single operations.
The `separateFlushed` defaults to whatever `overlappedSync` was set to. However, you can explicitly set. If you want to use `overlappingSync`, but have all write operations resolve when the transaction is fully flushed and durable, you can set `separateFlushed` to `false`. Alternately, if you want to use different `overlappingSync` settings, but also have a `flushed` promise, you can set `separateFlushed` to `true`.
The `separateFlushed` option defaults to whatever `overlappingSync` was set to. However, you can explicitly set it. If you want to use `overlappingSync`, but have all write operations resolve when the transaction is fully flushed and durable, you can set `separateFlushed` to `false`. Alternately, if you want to use differing `overlappingSync` settings, but also have a `flushed` promise, you can set `separateFlushed` to `true`.
Enabling `overlappingSync` option is probably not helpful on Windows, as Window's disk flushing operation tends to have poor performance characteristic (whereas Windows tends to perform well with standard transactions), but YMMV. This option may be enabled by default in the future, for non-Windows platforms.
Enabling the `overlappingSync` option is generally not recommended on Windows, as Windows' disk flushing operation tends to have poor performance characteristics on larger databases (whereas Windows tends to perform well with standard transactions), but YMMV. This option may be enabled by default in the future. For non-Windows platforms, this is probably a good setting:
```
overlappingSync: os.platform() != 'win32',
separateFlushed: true,
```
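Putting the suggested settings together, a hedged sketch (assuming, per the text above, that with `separateFlushed` enabled the commit promise carries a separate `flushed` promise; the key and value are made up):
```
import os from 'os';
import { open } from 'lmdb';

let myStore = open({
	path: 'my-db',
	overlappingSync: os.platform() != 'win32',
	separateFlushed: true,
});
let written = myStore.put('greeting', { someValue: 'Hello' });
await written;         // committed and visible to subsequent reads
await written.flushed; // assumed: resolves once the commit is flushed to disk
```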
@@ -407,2 +413,7 @@ #### Serialization options
## Custom Key Encoding
Custom key encoding can be useful for defining more efficient encodings of specific keys like UUIDs. It can be specified by providing a `keyEncoder` object with the following methods (a sketch follows the list):
* `writeKey(key, targetBuffer, startPosition)` - This should write the provided key to the target buffer and return the end position in the buffer.
* `readKey(sourceBuffer, start, end)` - This should read the key from the provided buffer, using the provided start and end positions, and return the key.
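A minimal, hypothetical sketch of such a key encoder, assuming the target and source buffers behave like Node buffers (here keys are fixed-width 32-bit unsigned integers):
```
let myStore = open({
	path: 'my-db',
	keyEncoder: {
		writeKey(key, targetBuffer, startPosition) {
			// write the key and return the end position in the buffer
			targetBuffer.writeUInt32BE(key, startPosition);
			return startPosition + 4;
		},
		readKey(sourceBuffer, start, end) {
			// read the key back from the given range
			return sourceBuffer.readUInt32BE(start);
		},
	},
});
```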
## Events
@@ -412,3 +423,3 @@
`beforecommit` - This event is fired before a batched operation starts a transaction to write all queued writes to the database. The callback function can perform additional (asynchronous) writes (`put` and `remove`) and they will be included in the transaction about to be performed (this can be useful for updating a global version stamp based on all previous writes, for example).
`beforecommit` - This event is fired before a batched operation starts a transaction to write all queued writes to the database. The callback function can perform additional (asynchronous) writes (`put` and `remove`) and they will be included in the transaction about to be performed (this can be useful for updating a global version stamp based on all previous writes, for example). Using this event forces `eventTurnBatching` to be enabled.
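A hedged sketch of listening for this event (assuming the store exposes a standard EventEmitter-style `on` method; the key is hypothetical):
```
myStore.on('beforecommit', () => {
	// queue one more write; it will be included in the transaction
	// that is about to be committed
	myStore.put('global-version-stamp', Date.now());
});
```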
@@ -415,0 +426,0 @@ ## LevelUp
@@ -164,2 +164,10 @@ import path from 'path';
});
it('clear between puts', async function() {
db.put('key0', 'zero')
db.clearAsync()
await db.put('key1', 'one')
should.equal(db.get('key0'), undefined)
should.equal(db.get('hello'), undefined)
should.equal(db.get('key1'), 'one')
})
it('string', async function() {
@@ -570,3 +578,5 @@ await db.put('key1', 'Hello world!');
expect(() => db.get({ foo: 'bar' })).to.throw();
//expect(() => db.put({ foo: 'bar' }, 'hello')).to.throw();
expect(() => db.put({ foo: 'bar' }, 'hello')).to.throw();
expect(() => db.put('x'.repeat(1979), 'hello')).to.throw();
expect(() => db2.put('x', 'x'.repeat(1979))).to.throw();
});
@@ -573,0 +583,0 @@ it('put options (sync)', function() {
@@ -7,3 +7,3 @@ import { getAddressShared as getAddress } from './native.js'
const WAITING_OPERATION = 0x2000000
const BACKPRESSURE_THRESHOLD = 30000000
const BACKPRESSURE_THRESHOLD = 50000
const TXN_DELIMITER = 0x8000000
@@ -68,3 +68,3 @@ const TXN_COMMITTED = 0x10000000
let targetBytes, position
let valueBuffer
let valueBuffer, valueSize, valueBufferStart
if (flags & 2) {
@@ -87,3 +87,11 @@ // encode first in case we have to write a shared structure
throw new Error('Invalid value to put in database ' + value + ' (' + (typeof value) +'), consider using encoder')
}
valueBufferStart = valueBuffer.start
if (valueBufferStart > -1) // if we have buffers with start/end position
valueSize = valueBuffer.end - valueBufferStart // size
else
valueSize = valueBuffer.length
if (store.dupSort && valueSize > MAX_KEY_SIZE)
throw new Error('The value is larger than the maximum size (' + MAX_KEY_SIZE + ') for a value in a dupSort database')
} else
valueSize = 0
if (writeTxn) {
@@ -119,3 +127,2 @@ targetBytes = fixedBuffer
let uint32 = targetBytes.uint32, float64 = targetBytes.float64
let valueSize = 0
let flagPosition = position << 1 // flagPosition is the 32-bit word starting position
@@ -139,4 +146,4 @@
if (keySize > MAX_KEY_SIZE) {
targetBytes.fill(0, keyStartPosition)
throw new Error('Key size is too large')
targetBytes.fill(0, keyStartPosition) // restore zeros
throw new Error('Key size is larger than the maximum key size (' + MAX_KEY_SIZE + ')')
}
@@ -146,10 +153,7 @@ uint32[flagPosition + 2] = keySize
if (flags & 2) {
let start = valueBuffer.start
if (start > -1) { // if we have buffers with start/end position
valueSize = valueBuffer.end - start // size
if (valueBufferStart > -1) { // if we have buffers with start/end position
// record pointer to value buffer
float64[position] = (valueBuffer.address ||
(valueBuffer.address = getAddress(valueBuffer.buffer) + valueBuffer.byteOffset)) + start
(valueBuffer.address = getAddress(valueBuffer.buffer) + valueBuffer.byteOffset)) + valueBufferStart
} else {
valueSize = valueBuffer.length
let valueArrayBuffer = valueBuffer.buffer
@@ -212,3 +216,3 @@ // record pointer to value buffer
flag: 0, // TODO: eventually eliminate this, as we can probably signify success by zeroing the flagPosition
valueBuffer,
valueBuffer: fixedBuffer, // these are all just placeholders so that we have the right hidden class initially allocated
next: null,
@@ -223,3 +227,3 @@ key,
flag: 0, // TODO: eventually eliminate this, as we can probably signify success by zeroing the flagPosition
valueBuffer,
valueBuffer: fixedBuffer, // these are all just placeholders so that we have the right hidden class initially allocated
next: null,
@@ -273,3 +277,3 @@ }
backpressureArray = new Int32Array(new SharedArrayBuffer(4), 0, 1)
Atomics.wait(backpressureArray, 0, 0, 1)
Atomics.wait(backpressureArray, 0, 0, Math.round(outstandingWriteCount / BACKPRESSURE_THRESHOLD))
}
@@ -362,3 +366,3 @@ if (startAddress) {
txnResolution.nextTxn = resolution
outstandingWriteCount = 0
//outstandingWriteCount = 0
}
@@ -365,0 +369,0 @@ else
License Policy Violation
License: This package is not allowed per your license policy. Review the package's license to ensure compliance.
Found 1 instance in 1 package
Native code
Supply chain risk: Contains native code (e.g., compiled binaries or shared libraries). Including native code can obscure malicious behavior.
Found 4 instances in 1 package