LevelUP is a Node.js library that provides a simple interface for interacting with LevelDB, a fast key-value storage library. It allows for efficient storage and retrieval of data, making it suitable for applications that require high-performance data operations.
Basic Put and Get
This feature allows you to store and retrieve key-value pairs in the database. The `put` method is used to store a value with a specific key, and the `get` method is used to retrieve the value associated with a key.
const level = require('level');
const db = level('./mydb');
// Put a key-value pair
await db.put('name', 'LevelUP');
// Get the value for a key
const value = await db.get('name');
console.log(value); // 'LevelUP'
Batch Operations
Batch operations allow you to perform multiple put and delete operations in a single atomic action. This is useful for making multiple changes to the database efficiently.
const level = require('level');
const db = level('./mydb');
// Perform batch operations
await db.batch()
  .put('name', 'LevelUP')
  .put('type', 'database')
  .del('oldKey')
  .write();
Streams
Streams provide a way to read and write data in a continuous flow. The `createReadStream` method allows you to read all key-value pairs in the database as a stream, which is useful for processing large datasets.
const level = require('level');
const db = level('./mydb');
// Create a read stream
const stream = db.createReadStream();
stream.on('data', ({ key, value }) => {
  console.log(`${key} = ${value}`);
});
Sublevel
Sublevel allows you to create isolated sub-databases within a LevelDB instance. This is useful for organizing data into different namespaces.
const level = require('level');
const sublevel = require('subleveldown');
const db = level('./mydb');
const subdb = sublevel(db, 'sub');
// Put and get in sublevel
await subdb.put('name', 'SubLevelUP');
const value = await subdb.get('name');
console.log(value); // 'SubLevelUP'
LevelDOWN is a lower-level binding for LevelDB, providing a more direct interface to the LevelDB library. It is used as a backend for LevelUP but can be used independently for more fine-grained control over LevelDB operations.
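For illustration only, here is a minimal sketch of using LevelDOWN on its own. It assumes the leveldown package's callback-style open/put/get/close API; the './mydb' location is illustrative, and keys and values must be strings or Buffers.
var leveldown = require('leveldown')
var db = leveldown('./mydb')

// the store must be opened before issuing any operations
db.open(function (err) {
  if (err) throw err

  db.put('name', 'LevelDOWN', function (err) {
    if (err) throw err

    // asBuffer: false asks for a string instead of a Buffer (assumed option)
    db.get('name', { asBuffer: false }, function (err, value) {
      if (err) throw err
      console.log(value) // 'LevelDOWN'
      db.close(function () {})
    })
  })
})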
RocksDB is a high-performance key-value store developed by Facebook. It is similar to LevelDB but offers additional features like column families and more tunable performance options. It can be used as an alternative to LevelDB for applications requiring higher performance.
Redis is an in-memory key-value store known for its speed and support for various data structures like strings, hashes, lists, sets, and more. Unlike LevelDB, Redis operates entirely in memory, making it suitable for use cases where low-latency access is critical.
LevelDB is a simple key/value data store built by Google, inspired by BigTable. It's used in Google Chrome and many other products. LevelDB supports arbitrary byte arrays as both keys and values, singular get, put and delete operations, batched put and delete, forward and reverse iteration and simple compression using the Snappy algorithm which is optimised for speed over compression.
LevelUP aims to expose the features of LevelDB in a Node.js-friendly way. Both keys and values are treated as Buffer objects and are automatically converted using a specified 'encoding'. LevelDB's iterators are exposed as a Node.js-style object ReadStream, and writing can be performed via an object WriteStream.
An important feature of LevelDB is that it stores entries sorted by keys. This makes LevelUP's ReadStream interface a very powerful way to look up items, particularly when combined with the start option.
LevelUP is an OPEN Open Source Project, see the Contributing section to find out what this means.
See also a list of Node.js LevelDB modules and projects in the wiki.
Windows support is a work in progress; see issue #5 if you would like to help on that front.
All operations are asynchronous although they don't necessarily require a callback if you don't need to know when the operation was performed.
var levelup = require('levelup')

// 1) Create our database, supply location and options.
//    This will create or open the underlying LevelDB store.
var db = levelup('./mydb')

// 2) put a key & value
db.put('name', 'LevelUP', function (err) {
  if (err) return console.log('Ooops!', err) // some kind of I/O error

  // 3) fetch by key
  db.get('name', function (err, value) {
    if (err) return console.log('Ooops!', err) // likely the key was not found

    // ta da!
    console.log('name=' + value)
  })
})
levelup()
db.open()
db.close()
db.put()
db.get()
db.del()
db.batch()
db.approximateSize()
db.isOpen()
db.isClosed()
db.createReadStream()
db.createKeyStream()
db.createValueStream()
db.createWriteStream()
levelup() is the main entry point for creating a new LevelUP instance and opening the underlying store with LevelDB.
This function returns a new instance of LevelUP and will also initiate an open() operation. Opening the database is an asynchronous operation which will trigger your callback if you provide one. The callback should take the form function (err, db) {}, where db is the LevelUP instance. If you don't provide a callback, any read & write operations are simply queued internally until the database is fully opened.
This leads to two alternative ways of managing a new LevelUP instance:
levelup(location, options, function (err, db) {
  if (err) throw err

  db.get('foo', function (err, value) {
    if (err) return console.log('foo does not exist')
    console.log('got foo =', value)
  })
})

// vs the equivalent:

var db = levelup(location, options) // will throw if an error occurs
db.get('foo', function (err, value) {
  if (err) return console.log('foo does not exist')
  console.log('got foo =', value)
})
The location argument is available as a read-only property on the returned LevelUP instance.
options
levelup() takes an optional options object as its second argument; the following properties are accepted:
'createIfMissing' (boolean, default: true): If true, will initialise an empty database at the specified location if one doesn't already exist. If false and a database doesn't exist, you will receive an error in your open() callback and your database won't open.
'errorIfExists' (boolean, default: false): If true, you will receive an error in your open() callback if a database already exists at the specified location.
'compression' (boolean, default: true): If true, all compressible data will be run through the Snappy compression algorithm before being stored. Snappy is very fast and you won't gain much speed by disabling it, so leave this on unless you have a good reason to turn it off.
'cacheSize' (number, default: 8 * 1024 * 1024): The size (in bytes) of the in-memory LRU cache holding frequently used uncompressed block contents.
'encoding' (string, default: 'utf8'): The encoding of the keys and values passed through Node.js' Buffer implementation (see Buffer#toString()). 'utf8' is the default encoding for both keys and values, so you can simply pass in strings and expect strings from your get() operations. You can also pass Buffer objects as keys and/or values and conversion will be performed. Supported encodings are: hex, utf8, ascii, binary, base64, ucs2, utf16le. 'json' encoding is also supported; see below.
'keyEncoding' and 'valueEncoding' (string, default: 'utf8'): use instead of encoding to specify the exact encoding of the keys and of the values in this database, respectively.
Additionally, each of the main interface methods accepts an optional options object that can be used to override encoding (or keyEncoding & valueEncoding); the sketch below shows both the constructor options and a per-call override.
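As a sketch of how these options fit together (using only the option names documented above; the location and keys are illustrative):
var levelup = require('levelup')

var db = levelup('./mydb', {
  createIfMissing: true,       // create the store if it doesn't exist (default)
  errorIfExists: false,        // don't complain if it already exists (default)
  compression: true,           // run compressible data through Snappy (default)
  cacheSize: 8 * 1024 * 1024,  // 8 MB LRU cache (default)
  keyEncoding: 'utf8',
  valueEncoding: 'json'        // store and retrieve values as JavaScript objects
})

// per-call override of the database-wide encoding
db.get('settings', { valueEncoding: 'utf8' }, function (err, raw) {
  if (err) return console.log('Ooops!', err)
  console.log('raw JSON text:', raw)
})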
open() opens the underlying LevelDB store. In general you should never need to call this method directly as it's automatically called by levelup().
However, it is possible to reopen a database after it has been closed with close(), although this is not generally advised.
close() closes the underlying LevelDB store. The callback will receive any error encountered during closing as the first argument.
You should always clean up your LevelUP instance by calling close() when you no longer need it, to free up resources. A LevelDB store cannot be opened by multiple instances of LevelDB/LevelUP simultaneously.
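A minimal sketch of shutting a store down cleanly:
db.close(function (err) {
  if (err) return console.log('error while closing', err)
  console.log('store closed, resources released')
})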
put() is the primary method for inserting data into the store. Both the key and value can be arbitrary data objects.
The callback argument is optional, but if you don't provide one and an error occurs then expect the error to be thrown.
options
Encoding of the key and value objects will adhere to the encoding option(s) provided to levelup(), although you can provide alternative encoding settings in the options for put() (it's recommended that you stay consistent in your encoding of keys and values in a single store).
If you provide a 'sync' value of true in your options object, LevelDB will perform a synchronous write of the data, although the operation will be asynchronous as far as Node is concerned. Normally, LevelDB passes the data to the operating system for writing and returns immediately; however, a synchronous write will use fsync() or equivalent, so your callback won't be triggered until the data is actually on disk. Synchronous filesystem writes are significantly slower than asynchronous writes, but if you want to be absolutely sure that the data is flushed then you can use 'sync': true.
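For example, a write that is flushed to disk before the callback fires (the same put() call as above, with an options object added):
db.put('name', 'LevelUP', { sync: true }, function (err) {
  if (err) return console.log('Ooops!', err) // some kind of I/O error
  console.log('value is on disk, not merely handed to the OS')
})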
get() is the primary method for fetching data from the store. The key can be an arbitrary data object, but if it doesn't exist in the store then the callback will receive an error as its first argument.
options
Encoding of the key objects will adhere to the encoding option(s) provided to levelup(), although you can provide alternative encoding settings in the options for get() (it's recommended that you stay consistent in your encoding of keys and values in a single store).
LevelDB will by default fill the in-memory LRU cache with data from a call to get(). Disabling this is done by setting fillCache to false.
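For example, a read that skips populating the LRU cache:
db.get('name', { fillCache: false }, function (err, value) {
  if (err) return console.log('Ooops!', err) // likely the key was not found
  console.log('name=' + value)
})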
del() is the primary method for removing data from the store. The key can be an arbitrary data object, but if it doesn't exist in the store then the callback will receive an error as its first argument.
options
Encoding of the key objects will adhere to the encoding option(s) provided to levelup(), although you can provide alternative encoding settings in the options for del() (it's recommended that you stay consistent in your encoding of keys and values in a single store).
A 'sync' option can also be passed; see put() for details on how this works.
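For example, a delete that is flushed to disk before the callback fires:
db.del('name', { sync: true }, function (err) {
  if (err) return console.log('Ooops!', err) // e.g. the key was not found
  console.log('name removed')
})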
batch() can be used for very fast bulk-write operations (both put and delete). The array argument should contain a list of operations to be executed sequentially. Each operation is contained in an object having the properties type, key and value, where type is either 'put' or 'del'. In the case of 'del' the 'value' property is ignored.
var ops = [
    { type: 'del', key: 'father' }
  , { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' }
  , { type: 'put', key: 'dob', value: '16 February 1941' }
  , { type: 'put', key: 'spouse', value: 'Kim Young-sook' }
  , { type: 'put', key: 'occupation', value: 'Clown' }
]

db.batch(ops, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('Great success dear leader!')
})
options
See put() for a discussion on the options object. You can overwrite the default key and value encodings and also specify the use of sync filesystem operations.
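For example, reusing the ops array above with per-call options:
db.batch(ops, { sync: true, keyEncoding: 'utf8', valueEncoding: 'utf8' }, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('batch flushed to disk')
})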
approximateSize() can be used to get the approximate number of bytes of file system space used by the range [start..end). The result may not include recently written data.
db.approximateSize('a', 'c', function (err, size) {
  if (err) return console.error('Ooops!', err)
  console.log('Approximate size of range is %d', size)
})
A LevelUP object can be in one of the following states: 'new', 'opening', 'open', 'closing' or 'closed'.
isOpen() will return true only when the state is 'open'.
isClosed() will return true only when the state is 'closing' or 'closed'; this can be useful for determining if read and write operations are permissible.
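A small sketch of using these checks before writing:
if (db.isOpen()) {
  db.put('status', 'ready', function (err) {
    if (err) console.log('Ooops!', err)
  })
} else if (!db.isClosed()) {
  // still opening: reads & writes are queued internally until the store is open
  db.put('status', 'ready')
}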
You can obtain a ReadStream of the full database by calling the createReadStream() method. The resulting stream is a complete Node.js-style Readable Stream where 'data' events emit objects with 'key' and 'value' pairs.
db.createReadStream()
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })
  .on('close', function () {
    console.log('Stream closed')
  })
  .on('end', function () {
    console.log('Stream ended')
  })
The standard pause(), resume() and destroy() methods are implemented on the ReadStream, as is pipe() (see below). 'data', 'error', 'end' and 'close' events are emitted.
Additionally, you can supply an options object as the first parameter to createReadStream() with the following options (a combined example follows the list):
'start': the key you wish to start the read at. By default it will start at the beginning of the store. Note that the start doesn't have to be an actual key that exists; LevelDB will simply find the next key, greater than the key you provide.
'end': the key you wish to end the read on. By default it will continue until the end of the store. Again, the end doesn't have to be an actual key, as an (inclusive) <=-type operation is performed to detect the end. You can also use the destroy() method instead of supplying an 'end' parameter to achieve the same effect.
'reverse' (boolean, default: false): set to true if you want the stream to go in reverse order. Beware that due to the way LevelDB works, a reverse seek will be slower than a forward seek.
'keys' (boolean, default: true): whether the 'data' event should contain keys. If set to true and 'values' set to false then 'data' events will simply be keys, rather than objects with a 'key' property. Used internally by the createKeyStream() method.
'values' (boolean, default: true): whether the 'data' event should contain values. If set to true and 'keys' set to false then 'data' events will simply be values, rather than objects with a 'value' property. Used internally by the createValueStream() method.
'limit' (number, default: -1): limit the number of results collected by this stream. This number represents a maximum number of results and may not be reached if you get to the end of the store or your 'end' value first. A value of -1 means there is no limit.
'fillCache' (boolean, default: false): whether LevelDB's LRU cache should be filled with the data read.
A KeyStream is a ReadStream where the 'data' events are simply the keys from the database, so it can be used like a traditional stream rather than an object stream.
You can obtain a KeyStream either by calling the createKeyStream() method on a LevelUP object or by passing an options object to createReadStream() with keys set to true and values set to false.
db.createKeyStream()
  .on('data', function (data) {
    console.log('key=', data)
  })

// same as:
db.createReadStream({ keys: true, values: false })
  .on('data', function (data) {
    console.log('key=', data)
  })
A ValueStream is a ReadStream where the 'data' events are simply the values from the database, so it can be used like a traditional stream rather than an object stream.
You can obtain a ValueStream either by calling the createValueStream() method on a LevelUP object or by passing an options object to createReadStream() with values set to true and keys set to false.
db.createValueStream()
  .on('data', function (data) {
    console.log('value=', data)
  })

// same as:
db.createReadStream({ keys: false, values: true })
  .on('data', function (data) {
    console.log('value=', data)
  })
A WriteStream can be obtained by calling the createWriteStream() method. The resulting stream is a complete Node.js-style Writable Stream which accepts objects with 'key' and 'value' pairs on its write() method. The WriteStream will buffer writes and submit them as a batch() operation where the writes occur on the same event loop tick; otherwise they are treated as simple put() operations.
var ws = db.createWriteStream()

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

// write() returns a boolean, so the calls can't be chained
ws.write({ key: 'name', value: 'Yuri Irsenovich Kim' })
ws.write({ key: 'dob', value: '16 February 1941' })
ws.write({ key: 'spouse', value: 'Kim Young-sook' })
ws.write({ key: 'occupation', value: 'Clown' })
ws.end()
The standard write(), end(), destroy() and destroySoon() methods are implemented on the WriteStream. 'drain', 'error', 'close' and 'pipe' events are emitted.
A ReadStream can be piped directly to a WriteStream, allowing for easy copying of an entire database. A simple copy() operation is included in LevelUP that performs exactly this on two open databases:
function copy (srcdb, dstdb, callback) {
  srcdb.createReadStream().pipe(dstdb.createWriteStream()).on('close', callback)
}
The ReadStream is also fstream-compatible which means you should be able to pipe to and from fstreams. So you can serialize and deserialize an entire database to a directory where keys are filenames and values are their contents, or even into a tar file using node-tar. See the fstream functional test for an example. (Note: I'm not really sure there's a great use-case for this but it's a fun example and it helps to harden the stream implementations.)
KeyStreams and ValueStreams can be treated like standard streams of raw data. If 'encoding' is set to 'binary', the 'data' events will simply be standard Node Buffer objects straight out of the data store.
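A minimal sketch of treating a ValueStream as raw data, assuming the per-call 'encoding' override described earlier also applies to stream options; the output file name is illustrative:
var fs = require('fs')

db.createValueStream({ encoding: 'binary' })
  .pipe(fs.createWriteStream('./values.dump'))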
LevelUP emits events when the callbacks to the corresponding methods are called.
db.emit('put', key, value): emitted when a new value is 'put'
db.emit('del', key): emitted when a value is deleted
db.emit('batch', ary): emitted when a batch operation has executed
db.emit('ready'): emitted when the database has opened ('open' is a synonym)
db.emit('closed'): emitted when the database has closed
db.emit('opening'): emitted when the database is opening
db.emit('closing'): emitted when the database is closing
If you do not pass a callback to an async function, and there is an error, LevelUP will emit('error', err) instead.
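For example, listening for a couple of these events:
db.on('put', function (key, value) {
  console.log('stored', key, '=', value)
})
db.on('error', function (err) {
  console.log('an operation without a callback failed:', err)
})

db.put('name', 'LevelUP') // no callback, so any error surfaces via the 'error' event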
If you specify 'json' encoding for keys and/or values, you can then supply JavaScript objects to LevelUP and receive them back from all fetch operations, including ReadStreams. LevelUP will automatically stringify your objects and store them as utf8, and will parse the strings back into objects before passing them back to you.
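A minimal sketch of the 'json' encoding; the location and keys are illustrative:
var db = levelup('./people', { valueEncoding: 'json' })

db.put('dad', { name: 'Yuri Irsenovich Kim', dob: '16 February 1941' }, function (err) {
  if (err) return console.log('Ooops!', err)

  db.get('dad', function (err, person) {
    if (err) return console.log('Ooops!', err)
    console.log(person.name) // stored strings are parsed back into objects for you
  })
})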
LevelUP is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the CONTRIBUTING.md file for more details.
LevelUP is only possible due to the excellent work of the following contributors:
Rod Vagg | GitHub/rvagg | Twitter/@rvagg
John Chesley | GitHub/chesles | Twitter/@chesles
Jake Verbaten | GitHub/raynos | Twitter/@raynos2
Dominic Tarr | GitHub/dominictarr | Twitter/@dominictarr
Max Ogden | GitHub/maxogden | Twitter/@maxogden
Lars-Magnus Skog | GitHub/ralphtheninja | Twitter/@ralphtheninja
David Björklund | GitHub/kesla | Twitter/@david_bjorklund
Julian Gruber | GitHub/juliangruber | Twitter/@juliangruber
Paolo Fragomeni | GitHub/hij1nx | Twitter/@hij1nx
Copyright (c) 2012-2013 LevelUP contributors (listed above).
LevelUP is licensed under an MIT +no-false-attribs license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details.
LevelUP builds on the excellent work of the LevelDB and Snappy teams from Google and additional contributors. LevelDB and Snappy are both issued under the New BSD Licence.
[0.6.0-rc1] - 2013-02-24
Native LevelDB binding extracted into the leveldown project (@rvagg)
Depend on leveldown@0.0.1 (@rvagg)
FAQs
Fast & simple storage - a Node.js-style LevelDB wrapper
The npm package levelup receives a total of 826,360 weekly downloads. As such, levelup's popularity is classified as popular.
We found that levelup has an unhealthy version release cadence and project activity because the last version was released a year ago. It has 3 open source maintainers collaborating on the project.