LevelUP
Fast & simple storage - a Node.js-style LevelDB wrapper
Introduction
LevelDB is a simple key/value data store built by Google, inspired by BigTable. It's used in Google Chrome and many other products. LevelDB supports arbitrary byte arrays as both keys and values, singular get, put and delete operations, batched put and delete, bi-directional iterators and simple compression using the very fast Snappy algorithm.
LevelUP aims to expose the features of LevelDB in a Node.js-friendly way. All standard Buffer
encoding types are supported, as is a special JSON encoding. LevelDB's iterators are exposed as a Node.js-style readable stream, and a matching writable stream converts writes to batch operations.
LevelDB stores entries sorted lexicographically by keys. This makes LevelUP's ReadStream
interface a very powerful query mechanism.
LevelUP is an OPEN Open Source Project, see the Contributing section to find out what this means.
Relationship to LevelDOWN
LevelUP is designed to be backed by LevelDOWN which provides a pure C++ binding to LevelDB and can be used as a stand-alone package if required.
As of version 0.9, LevelUP no longer requires LevelDOWN as a dependency so you must npm install leveldown
when you install LevelUP.
LevelDOWN is now optional because LevelUP can be used with alternative backends, such as level.js in the browser or MemDOWN for a pure in-memory store.
LevelUP will look for LevelDOWN and throw an error if it can't find it in its Node require()
path. It will also tell you if the installed version of LevelDOWN is incompatible.
The level package is available as an alternative installation mechanism. Install it instead to automatically get both LevelUP & LevelDOWN. It exposes LevelUP on its export (i.e. you can do var leveldb = require('level')).
Tested & supported platforms
- Linux: including ARM platforms such as Raspberry Pi and Kindle!
- Mac OS
- Solaris: including Joyent's SmartOS & Nodejitsu
- Windows: Node 0.10 and above only. See installation instructions for node-gyp's dependencies here; you'll need these (free) components from Microsoft to compile and run any native Node add-on on Windows.
Basic usage
First you need to install LevelUP!
$ npm install levelup leveldown
Or
$ npm install level
(this second option requires you to use LevelUP by calling var levelup = require('level')
)
All operations are asynchronous although they don't necessarily require a callback if you don't need to know when the operation was performed.
var levelup = require('levelup')
var db = levelup('./mydb')
db.put('name', 'LevelUP', function (err) {
  if (err) return console.log('Ooops!', err)
  db.get('name', function (err, value) {
    if (err) return console.log('Ooops!', err)
    console.log('name=' + value)
  })
})
API
levelup(location[, options[, callback]])
levelup(options[, callback])
levelup(db[, callback])
levelup()
is the main entry point for creating a new LevelUP instance and opening the underlying store with LevelDB.
This function returns a new instance of LevelUP and will also initiate an open()
operation. Opening the database is an asynchronous operation which will trigger your callback if you provide one. The callback should take the form: function (err, db) {}
where the db
is the LevelUP instance. If you don't provide a callback, any read & write operations are simply queued internally until the database is fully opened.
This leads to two alternative ways of managing a new LevelUP instance:
// 1) Create our database, supplying a location, options and a callback;
//    the callback fires once the underlying store is open.
levelup(location, options, function (err, db) {
  if (err) throw err
  db.get('foo', function (err, value) {
    if (err) return console.log('foo does not exist')
    console.log('got foo =', value)
  })
})

// 2) Create our database without a callback; operations are queued
//    internally until the store is fully opened.
var db = levelup(location, options)
db.get('foo', function (err, value) {
  if (err) return console.log('foo does not exist')
  console.log('got foo =', value)
})
The location
argument is available as a read-only property on the returned LevelUP instance.
The levelup(options, callback)
form (with optional callback
) is only available where you provide a valid 'db'
property on the options object (see below). This form is only for back-ends that don't require a location argument, such as MemDOWN.
For example:
var levelup = require('levelup')
var memdown = require('memdown')
var db = levelup({ db: memdown })
The levelup(db, callback)
form (with optional callback
) is only available where db
is a factory function, as would be provided as a 'db'
property on an options
object (see below). Again, this form is only for back-ends that don't require a location argument, such as MemDOWN.
For example:
var levelup = require('levelup')
var memdown = require('memdown')
var db = levelup(memdown)
options
levelup()
takes an optional options object as its second argument; the following properties are accepted:
-
'createIfMissing'
(boolean, default: true
): If true
, will initialise an empty database at the specified location if one doesn't already exist. If false
and a database doesn't exist you will receive an error in your open()
callback and your database won't open.
-
'errorIfExists'
(boolean, default: false
): If true
, you will receive an error in your open()
callback if the database exists at the specified location.
-
'compression'
(boolean, default: true
): If true
, all compressible data will be run through the Snappy compression algorithm before being stored. Snappy is very fast, and you're unlikely to gain much speed by disabling it, so leave this on unless you have a good reason to turn it off.
-
'cacheSize'
(number, default: 8 * 1024 * 1024
): The size (in bytes) of the in-memory LRU cache holding frequently used uncompressed block contents.
-
'keyEncoding'
and 'valueEncoding'
(string, default: 'utf8'
): The encoding of the keys and values passed through Node.js' Buffer
implementation (see Buffer#toString()).
'utf8'
is the default encoding for both keys and values so you can simply pass in strings and expect strings from your get()
operations. You can also pass Buffer
objects as keys and/or values and conversion will be performed.
Supported encodings are: hex, utf8, ascii, binary, base64, ucs2, utf16le.
'json'
encoding is also supported, see below.
-
'db'
(object, default: LevelDOWN): LevelUP is backed by LevelDOWN to provide an interface to LevelDB. You can completely replace the use of LevelDOWN by providing a "factory" function that will return a LevelDOWN API compatible object given a location
argument. For further information, see MemDOWN, a fully LevelDOWN API compatible replacement that uses a memory store rather than LevelDB. Also see Abstract LevelDOWN, a partial implementation of the LevelDOWN API that can be used as a base prototype for a LevelDOWN substitute.
Additionally, each of the main interface methods accepts an optional options object that can be used to override 'keyEncoding'
and 'valueEncoding'
.
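For example, a hedged sketch that combines several of the options above when opening a store (the location and the specific values are illustrative, not recommendations):
var levelup = require('levelup')

// open (or create) a store with utf8 keys and JSON values
var db = levelup('./mydb', {
    createIfMissing : true
  , errorIfExists   : false
  , compression     : true
  , cacheSize       : 8 * 1024 * 1024
  , keyEncoding     : 'utf8'
  , valueEncoding   : 'json'
})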
db.open([callback])
open()
opens the underlying LevelDB store. In general you should never need to call this method directly as it's automatically called by levelup()
.
However, it is possible to reopen a database after it has been closed with close()
, although this is not generally advised.
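As a minimal sketch (the ordering and callbacks shown are only illustrative), a close-and-reopen cycle looks like:
db.close(function (err) {
  if (err) throw err
  // reopening the same instance is possible, though not generally advised
  db.open(function (err) {
    if (err) throw err
    console.log('store is open again')
  })
})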
db.close([callback])
close()
closes the underlying LevelDB store. The callback will receive any error encountered during closing as the first argument.
You should always clean up your LevelUP instance by calling close()
when you no longer need it to free up resources. A LevelDB store cannot be opened by multiple instances of LevelDB/LevelUP simultaneously.
db.put(key, value[, options][, callback])
put()
is the primary method for inserting data into the store. Both the key
and value
can be arbitrary data objects.
The callback argument is optional but if you don't provide one and an error occurs then expect the error to be thrown.
options
Encoding of the key
and value
objects will adhere to 'keyEncoding'
and 'valueEncoding'
options provided to levelup()
, although you can provide alternative encoding settings in the options for put()
(it's recommended that you stay consistent in your encoding of keys and values in a single store).
If you provide a 'sync'
value of true
in your options
object, LevelDB will perform a synchronous write of the data; although the operation will be asynchronous as far as Node is concerned. Normally, LevelDB passes the data to the operating system for writing and returns immediately, however a synchronous write will use fsync()
or equivalent so your callback won't be triggered until the data is actually on disk. Synchronous filesystem writes are significantly slower than asynchronous writes but if you want to be absolutely sure that the data is flushed then you can use 'sync': true
.
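For example, a sketch of a put() that overrides the value encoding and requests a synchronous write (the key, value and options shown are illustrative assumptions):
db.put('user:1', { name: 'LevelUP' }, { valueEncoding: 'json', sync: true }, function (err) {
  if (err) return console.log('Ooops!', err)
  // with 'sync': true the callback fires only once the data has hit the disk
  console.log('stored and flushed')
})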
db.get(key[, options][, callback])
get()
is the primary method for fetching data from the store. The key
can be an arbitrary data object. If it doesn't exist in the store then the callback will receive an error as its first argument. A not-found err object will be of type 'NotFoundError'
so you can check err.type == 'NotFoundError'
or you can perform a truthy test on the property err.notFound
.
db.get('foo', function (err, value) {
  if (err) {
    if (err.notFound) {
      // handle a 'NotFoundError' here
      return
    }
    // I/O or other error, pass it up the callback chain
    return callback(err)
  }
  // handle `value` here
})
options
Encoding of the key
object will adhere to the 'keyEncoding'
option provided to levelup()
, although you can provide alternative encoding settings in the options for get()
(it's recommended that you stay consistent in your encoding of keys and values in a single store).
LevelDB will by default fill the in-memory LRU Cache with data from a call to get. Disabling this is done by setting fillCache
to false
.
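For example, a sketch of a get() that avoids filling the LRU cache, which may be useful for one-off bulk reads (the key and options are illustrative):
db.get('user:1', { fillCache: false, valueEncoding: 'json' }, function (err, value) {
  if (err) return console.log('Ooops!', err)
  console.log('got', value)
})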
db.del(key[, options][, callback])
del()
is the primary method for removing data from the store.
options
Encoding of the key
object will adhere to the 'keyEncoding'
option provided to levelup()
, although you can provide alternative encoding settings in the options for del()
(it's recommended that you stay consistent in your encoding of keys and values in a single store).
A 'sync'
option can also be passed, see put()
for details on how this works.
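A minimal sketch of a del() using the 'sync' option described above (the key is illustrative):
db.del('user:1', { sync: true }, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('deleted')
})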
db.batch(array[, options][, callback]) (array form)
batch()
can be used for very fast bulk-write operations (both put and delete). The array
argument should contain a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation inside LevelDB. Each operation is contained in an object having the following properties: type
, key
, value
, where the type is either 'put'
or 'del'
. In the case of 'del'
the 'value'
property is ignored. Any entries with a 'key'
of null
or undefined
will cause an error to be returned on the callback
and any 'type': 'put'
entry with a 'value'
of null
or undefined
will return an error.
var ops = [
    { type: 'del', key: 'father' }
  , { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' }
  , { type: 'put', key: 'dob', value: '16 February 1941' }
  , { type: 'put', key: 'spouse', value: 'Kim Young-sook' }
  , { type: 'put', key: 'occupation', value: 'Clown' }
]

db.batch(ops, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('Great success dear leader!')
})
options
See put()
for a discussion on the options
object. You can overwrite default 'keyEncoding'
and 'valueEncoding'
and also specify the use of sync
filesystem operations.
In addition to encoding options for the whole batch you can also overwrite the encoding per operation, like:
var ops = [{
    type          : 'put'
  , key           : new Buffer([1, 2, 3])
  , value         : { some: 'json' }
  , keyEncoding   : 'binary'
  , valueEncoding : 'json'
}]
db.batch() (chained form)
batch()
, when called with no arguments will return a Batch
object which can be used to build, and eventually commit, an atomic LevelDB batch operation. Depending on how it's used, it is possible to obtain greater performance when using the chained form of batch()
over the array form.
db.batch()
  .del('father')
  .put('name', 'Yuri Irsenovich Kim')
  .put('dob', '16 February 1941')
  .put('spouse', 'Kim Young-sook')
  .put('occupation', 'Clown')
  .write(function () { console.log('Done!') })
batch.put(key, value[, options])
Queue a put operation on the current batch, not committed until a write()
is called on the batch.
The optional options
argument can be used to override the default 'keyEncoding'
and/or 'valueEncoding'
.
This method may throw
a WriteError
if there is a problem with your put (such as the value
being null
or undefined
).
batch.del(key[, options])
Queue a del operation on the current batch, not committed until a write()
is called on the batch.
The optional options
argument can be used to override the default 'keyEncoding'
.
This method may throw
a WriteError
if there is a problem with your delete.
batch.clear()
Clear all queued operations on the current batch; any previously queued operations will be discarded.
batch.write([callback])
Commit the queued operations for this batch. All operations not cleared will be written to the database atomically, that is, they will either all succeed or fail with no partial commits. The optional callback
will be called when the operation has completed with an error argument if an error has occurred; if no callback
is supplied and an error occurs then this method will throw
a WriteError
.
db.isOpen()
A LevelUP object can be in one of the following states:
- "new" - newly created, not opened or closed
- "opening" - waiting for the database to be opened
- "open" - successfully opened the database, available for use
- "closing" - waiting for the database to be closed
- "closed" - database has been successfully closed, should not be used
isOpen()
will return true
only when the state is "open".
db.isClosed()
See isOpen()
isClosed()
will return true
only when the state is "closing" or "closed"; it can be useful for determining if read and write operations are permissible.
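A small, purely illustrative sketch of guarding writes with these state checks:
if (!db.isClosed()) {
  // reads & writes are permissible; if the store is still opening they are queued internally
  db.put('status', 'ready', function (err) {
    if (err) console.log('Ooops!', err)
  })
} else {
  console.log('database is closed or closing, open it again before writing')
}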
db.createReadStream([options])
You can obtain a ReadStream of the full database by calling the createReadStream()
method. The resulting stream is a complete Node.js-style Readable Stream where 'data'
events emit objects with 'key'
and 'value'
pairs.
db.createReadStream()
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })
  .on('close', function () {
    console.log('Stream closed')
  })
  .on('end', function () {
    console.log('Stream ended')
  })
The standard pause()
, resume()
and destroy()
methods are implemented on the ReadStream, as is pipe()
(see below). 'data'
, 'error'
, 'end'
and 'close'
events are emitted.
Additionally, you can supply an options object as the first parameter to createReadStream()
with the following options:
-
'start'
: the key you wish to start the read at. By default it will start at the beginning of the store. Note that the start doesn't have to be an actual key that exists; LevelDB will simply find the next key greater than the key you provide.
-
'end'
: the key you wish to end the read on. By default it will continue until the end of the store. Again, the end doesn't have to be an actual key as an (inclusive) <=
-type operation is performed to detect the end. You can also use the destroy()
method instead of supplying an 'end'
parameter to achieve the same effect.
-
'reverse'
(boolean, default: false
): set to true if you want the stream to go in reverse order. Beware that due to the way LevelDB works, a reverse seek will be slower than a forward seek.
-
'keys'
(boolean, default: true
): whether the 'data'
event should contain keys. If set to true
and 'values'
set to false
then 'data'
events will simply be keys, rather than objects with a 'key'
property. Used internally by the createKeyStream()
method.
-
'values'
(boolean, default: true
): whether the 'data'
event should contain values. If set to true
and 'keys'
set to false
then 'data'
events will simply be values, rather than objects with a 'value'
property. Used internally by the createValueStream()
method.
-
'limit'
(number, default: -1
): limit the number of results collected by this stream. This number represents a maximum number of results and may not be reached if you get to the end of the store or your 'end'
value first. A value of -1
means there is no limit.
-
'fillCache'
(boolean, default: false
): whether LevelDB's LRU cache should be filled with the data that is read.
-
'keyEncoding'
/ 'valueEncoding'
(string): the encoding applied to each read piece of data.
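As an example, a hedged sketch of a range query combining several of these options (the key range and limit are illustrative assumptions):
db.createReadStream({ start: 'user:', end: 'user:\xff', limit: 100, reverse: false })
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('end', function () {
    console.log('done')
  })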
db.createKeyStream([options])
A KeyStream is a ReadStream where the 'data'
events are simply the keys from the database so it can be used like a traditional stream rather than an object stream.
You can obtain a KeyStream either by calling the createKeyStream()
method on a LevelUP object or by passing an options object to createReadStream()
with keys
set to true
and values
set to false
.
db.createKeyStream()
  .on('data', function (data) {
    console.log('key=', data)
  })

// same as:
db.createReadStream({ keys: true, values: false })
  .on('data', function (data) {
    console.log('key=', data)
  })
db.createValueStream([options])
A ValueStream is a ReadStream where the 'data'
events are simply the values from the database so it can be used like a traditional stream rather than an object stream.
You can obtain a ValueStream either by calling the createValueStream()
method on a LevelUP object or by passing an options object to createReadStream()
with values
set to true
and keys
set to false
.
db.createValueStream()
  .on('data', function (data) {
    console.log('value=', data)
  })

// same as:
db.createReadStream({ keys: false, values: true })
  .on('data', function (data) {
    console.log('value=', data)
  })
db.createWriteStream([options])
A WriteStream can be obtained by calling the createWriteStream()
method. The resulting stream is a complete Node.js-style Writable Stream which accepts objects with 'key'
and 'value'
pairs on its write()
method.
The WriteStream will buffer writes and submit them as batch() operations where the writes occur within the same tick.
var ws = db.createWriteStream()

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

ws.write({ key: 'name', value: 'Yuri Irsenovich Kim' })
ws.write({ key: 'dob', value: '16 February 1941' })
ws.write({ key: 'spouse', value: 'Kim Young-sook' })
ws.write({ key: 'occupation', value: 'Clown' })
ws.end()
The standard write()
, end()
, destroy()
and destroySoon()
methods are implemented on the WriteStream. 'drain'
, 'error'
, 'close'
and 'pipe'
events are emitted.
You can specify encodings both for the whole stream and individual entries:
To set the encoding for the whole stream, provide an options object as the first parameter to createWriteStream()
with 'keyEncoding'
and/or 'valueEncoding'
.
To set the encoding for an individual entry:
writeStream.write({
    key           : new Buffer([1, 2, 3])
  , value         : { some: 'json' }
  , keyEncoding   : 'binary'
  , valueEncoding : 'json'
})
write({ type: 'put' })
If individual write()
operations are performed with a 'type'
property of 'del'
, they will be passed on as 'del'
operations to the batch.
var ws = db.createWriteStream()

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

ws.write({ type: 'del', key: 'name' })
ws.write({ type: 'del', key: 'dob' })
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ type: 'del', key: 'occupation' })
ws.end()
db.createWriteStream({ type: 'del' })
If the WriteStream is created with a 'type'
option of 'del'
, all write()
operations will be interpreted as 'del'
, unless explicitly specified as 'put'
.
var ws = db.createWriteStream({ type: 'del' })

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

ws.write({ key: 'name' })
ws.write({ key: 'dob' })
// but the default type can be overridden per entry:
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ key: 'occupation' })
ws.end()
Pipes and Node Stream compatibility
A ReadStream can be piped directly to a WriteStream, allowing for easy copying of an entire database. A simple copy()
operation is included in LevelUP that performs exactly this on two open databases:
function copy (srcdb, dstdb, callback) {
  srcdb.createReadStream().pipe(dstdb.createWriteStream()).on('close', callback)
}
The ReadStream is also fstream-compatible which means you should be able to pipe to and from fstreams. So you can serialize and deserialize an entire database to a directory where keys are filenames and values are their contents, or even into a tar file using node-tar. See the fstream functional test for an example. (Note: I'm not really sure there's a great use-case for this but it's a fun example and it helps to harden the stream implementations.)
KeyStreams and ValueStreams can be treated like standard streams of raw data. If 'keyEncoding'
or 'valueEncoding'
is set to 'binary'
the 'data'
events will simply be standard Node Buffer
objects straight out of the data store.
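For instance, a hedged sketch of streaming raw binary values straight into a file (the file name is an illustrative assumption):
var fs = require('fs')

db.createValueStream({ valueEncoding: 'binary' })
  .pipe(fs.createWriteStream('./values.dump'))
  .on('close', function () {
    console.log('dump complete')
  })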
db.db.approximateSize(start, end, callback)
approximateSize()
can be used to get the approximate number of bytes of file system space used by the range [start..end)
. The result may not include recently written data.
var db = require('level')('./huge.db')
db.db.approximateSize('a', 'c', function (err, size) {
  if (err) return console.error('Ooops!', err)
  console.log('Approximate size of range is %d', size)
})
Note: approximateSize()
is available via LevelDOWN, which by default is accessible as the db
property of your LevelUP instance. This is a specific LevelDB operation and is not likely to be available where you replace LevelDOWN with an alternative back-end via the 'db'
option.
db.db.getProperty(property)
getProperty
can be used to get internal details from LevelDB. When issued with a valid property string, a readable string will be returned (this method is synchronous).
Currently, the only valid properties are:
-
'leveldb.num-files-at-levelN'
: returns the number of files at level N, where N is an integer representing a valid level (e.g. "0").
-
'leveldb.stats'
: returns a multi-line string describing statistics about LevelDB's internal operation.
-
'leveldb.sstables'
: returns a multi-line string describing all of the sstables that make up contents of the current database.
var db = require('level')('./huge.db')
console.log(db.db.getProperty('leveldb.num-files-at-level3'))
Note: getProperty()
is available via LevelDOWN, which by default is accessible as the db
property of your LevelUP instance. This is a specific LevelDB operation and is not likely to be available where you replace LevelDOWN with an alternative back-end via the 'db'
option.
leveldown.destroy(location, callback)
destroy()
is used to completely remove an existing LevelDB database directory. You can use this function in place of a full directory rm if you want to be sure to only remove LevelDB-related files. If the directory only contains LevelDB files, the directory itself will be removed as well. If there are additional, non-LevelDB files in the directory, those files, and the directory, will be left alone.
The callback will be called when the destroy operation is complete, with a possible error
argument.
Note: destroy()
is available via LevelDOWN which you will have to have available to require()
, e.g.:
require('leveldown').destroy('./huge.db', function () { console.log('done!') })
leveldown.repair(location, callback)
repair()
can be used to attempt a restoration of a damaged LevelDB store. From the LevelDB documentation:
If a DB cannot be opened, you may attempt to call this method to resurrect as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function on a database that contains important information.
You will find information on the repair operation in the LOG file inside the store directory.
A repair()
can also be used to perform a compaction of the LevelDB log into table files.
The callback will be called when the repair operation is complete, with a possible error
argument.
Note: repair()
is available via LevelDOWN which you will have to have available to require()
, e.g.:
require('leveldown').repair('./huge.db', function () { console.log('done!') })
Events
LevelUP emits events when the callbacks to the corresponding methods are called.
- db.emit('put', key, value): emitted when a new value is 'put'
- db.emit('del', key): emitted when a value is deleted
- db.emit('batch', ary): emitted when a batch operation has executed
- db.emit('ready'): emitted when the database has opened ('open' is a synonym)
- db.emit('closed'): emitted when the database has closed
- db.emit('opening'): emitted when the database is opening
- db.emit('closing'): emitted when the database is closing
If you do not pass a callback to an async function, and there is an error, LevelUP will emit('error', err)
instead.
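A small sketch of listening for some of these events (illustrative only):
db.on('put', function (key, value) {
  console.log('inserted', key, '=', value)
})
db.on('error', function (err) {
  // fires when an async operation fails and no callback was supplied
  console.log('Ooops!', err)
})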
JSON data
If you specify 'json' encoding for keys and/or values, you can then supply JavaScript objects to LevelUP and receive them back from all fetch operations, including ReadStreams. LevelUP will automatically stringify your objects and store them as utf8, and will parse the strings back into objects before passing them back to you.
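For example, a sketch of a store using 'json' value encoding (the key and object contents are illustrative):
var db = levelup('./mydb', { valueEncoding: 'json' })

db.put('person', { name: 'Yuri Irsenovich Kim', dob: '16 February 1941' }, function (err) {
  if (err) return console.log('Ooops!', err)
  db.get('person', function (err, value) {
    if (err) return console.log('Ooops!', err)
    console.log(value.name) // a plain JavaScript object comes back out
  })
})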
Custom encodings
A custom encoding may be provided by passing in an object as a value for keyEncoding or valueEncoding (wherever accepted); it must have the following properties:
{
    encode : function (val) { ... }  // takes your value, returns a string or buffer-like value to store
  , decode : function (val) { ... }  // takes the stored string or buffer-like value, returns your value
  , buffer : boolean                 // true if encode returns (and decode accepts) a buffer-like value
  , type   : String                  // a name identifying this encoding type
}
"buffer-like" means either a Buffer
if running in Node, or a Uint8Array if in a browser. Use bops to get portable binary operations.
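As a purely hypothetical sketch of the shape such an object takes, here is a custom encoding that stores values as hex strings (not part of LevelUP; the names shown are illustrative):
var hexEncoding = {
    encode : function (val) { return new Buffer(String(val), 'utf8').toString('hex') }
  , decode : function (val) { return new Buffer(String(val), 'hex').toString('utf8') }
  , buffer : false          // encode returns a plain string rather than a buffer-like value
  , type   : 'hex-string'   // a name identifying this encoding
}

var db = levelup('./mydb', { valueEncoding: hexEncoding })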
Extending LevelUP
A list of Node.js LevelDB modules and projects can be found in the wiki.
When attempting to extend the functionality of LevelUP, it is recommended that you consider using level-hooks and/or level-sublevel. level-sublevel is particularly helpful for keeping additional, extension-specific, data in a LevelDB store. It allows you to partition a LevelUP instance into multiple sub-instances that each correspond to discrete namespaced key ranges.
Multi-process access
LevelDB is thread-safe but is not suitable for accessing with multiple processes. You should only ever have a LevelDB database open from a single Node.js process. Node.js clusters are made up of multiple processes so a LevelUP instance cannot be shared between them either.
See the wiki for some LevelUP extensions, including multilevel, that may help if you require a single data store to be shared across processes.
Getting support
There are multiple ways you can find help in using LevelDB in Node.js:
- IRC: you'll find an active group of LevelUP users in the ##leveldb channel on Freenode, including most of the contributors to this project.
- Mailing list: there is an active Node.js LevelDB Google Group.
- GitHub: you're welcome to open an issue here on this GitHub repository if you have a question.
Contributing
LevelUP is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the CONTRIBUTING.md file for more details.
Contributors
LevelUP is only possible due to the excellent work of the following contributors:
Windows
A large portion of the Windows support comes from code by Krzysztof Kowalczyk @kjk, see his Windows LevelDB port here. If you're using LevelUP on Windows, you should give him your thanks!
Licence & copyright
Copyright (c) 2012-2013 LevelUP contributors (listed above).
LevelUP is licensed under an MIT +no-false-attribs license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details.
LevelUP builds on the excellent work of the LevelDB and Snappy teams from Google and additional contributors. LevelDB and Snappy are both issued under the New BSD Licence.