LevelUP is a Node.js library that provides a simple interface for interacting with LevelDB, a fast key-value storage library. It allows for efficient storage and retrieval of data, making it suitable for applications that require high-performance data operations.
Basic Put and Get
This feature allows you to store and retrieve key-value pairs in the database. The `put` method is used to store a value with a specific key, and the `get` method is used to retrieve the value associated with a key.
const level = require('level');
const db = level('./mydb');
// Put a key-value pair
await db.put('name', 'LevelUP');
// Get the value for a key
const value = await db.get('name');
console.log(value); // 'LevelUP'
Batch Operations
Batch operations allow you to perform multiple put and delete operations in a single atomic action. This is useful for making multiple changes to the database efficiently.
const level = require('level');
const db = level('./mydb');
// Perform batch operations
await db.batch()
.put('name', 'LevelUP')
.put('type', 'database')
.del('oldKey')
.write();
Streams
Streams provide a way to read and write data in a continuous flow. The `createReadStream` method allows you to read all key-value pairs in the database as a stream, which is useful for processing large datasets.
const level = require('level');
const db = level('./mydb');
// Create a read stream
const stream = db.createReadStream();
stream.on('data', ({ key, value }) => {
console.log(`${key} = ${value}`);
});
Sublevel
Sublevel allows you to create isolated sub-databases within a LevelDB instance. This is useful for organizing data into different namespaces.
const level = require('level');
const sublevel = require('subleveldown');
const db = level('./mydb');
const subdb = sublevel(db, 'sub');
// Put and get in sublevel
await subdb.put('name', 'SubLevelUP');
const value = await subdb.get('name');
console.log(value); // 'SubLevelUP'
LevelDOWN is a lower-level binding for LevelDB, providing a more direct interface to the LevelDB library. It is used as a backend for LevelUP but can be used independently for more fine-grained control over LevelDB operations.
RocksDB is a high-performance key-value store developed by Facebook. It is similar to LevelDB but offers additional features like column families and more tunable performance options. It can be used as an alternative to LevelDB for applications requiring higher performance.
Redis is an in-memory key-value store known for its speed and support for various data structures like strings, hashes, lists, sets, and more. Unlike LevelDB, Redis operates entirely in memory, making it suitable for use cases where low-latency access is critical.
LevelDB is a simple key/value data store built by Google, inspired by BigTable. It's used in Google Chrome and many other products. LevelDB supports arbitrary byte arrays as both keys and values, singular get, put and delete operations, batched put and delete, bi-directional iterators and simple compression using the very fast Snappy algorithm.
LevelUP aims to expose the features of LevelDB in a Node.js-friendly way. All standard Buffer encoding types are supported, as is a special JSON encoding. LevelDB's iterators are exposed as a Node.js-style readable stream, and a matching writable stream converts writes to batch operations.
LevelDB stores entries sorted lexicographically by keys. This makes LevelUP's ReadStream interface a very powerful query mechanism.
LevelUP is an OPEN Open Source Project; see the Contributing section to find out what this means.
LevelUP is designed to be backed by LevelDOWN which provides a pure C++ binding to LevelDB and can be used as a stand-alone package if required.
As of version 0.9, LevelUP no longer requires LevelDOWN as a dependency, so you must npm install leveldown when you install LevelUP.
LevelDOWN is now optional because LevelUP can be used with alternative backends, such as level.js in the browser or MemDOWN for a pure in-memory store.
LevelUP will look for LevelDOWN and throw an error if it can't find it in its Node require() path. It will also tell you if the installed version of LevelDOWN is incompatible.
The level package is available as an alternative installation mechanism. Install it instead to automatically get both LevelUP & LevelDOWN. It exposes LevelUP on its export (i.e. you can var leveldb = require('level')).
First you need to install LevelUP!
$ npm install levelup leveldown
Or
$ npm install level
(this second option requires you to use LevelUP by calling var levelup = require('level'))
All operations are asynchronous although they don't necessarily require a callback if you don't need to know when the operation was performed.
var levelup = require('levelup')
// 1) Create our database, supply location and options.
// This will create or open the underlying LevelDB store.
var db = levelup('./mydb')
// 2) put a key & value
db.put('name', 'LevelUP', function (err) {
if (err) return console.log('Ooops!', err) // some kind of I/O error
// 3) fetch by key
db.get('name', function (err, value) {
if (err) return console.log('Ooops!', err) // likely the key was not found
// ta da!
console.log('name=' + value)
})
})
levelup()
db.open()
db.close()
db.put()
db.get()
db.del()
db.batch() (array form)
db.batch() (chained form)
db.isOpen()
db.isClosed()
db.createReadStream()
db.createKeyStream()
db.createValueStream()
db.createWriteStream()
levelup() is the main entry point for creating a new LevelUP instance and opening the underlying store with LevelDB.
This function returns a new instance of LevelUP and will also initiate an open() operation. Opening the database is an asynchronous operation which will trigger your callback if you provide one. The callback should take the form function (err, db) {}, where db is the LevelUP instance. If you don't provide a callback, any read & write operations are simply queued internally until the database is fully opened.
This leads to two alternative ways of managing a new LevelUP instance:
levelup(location, options, function (err, db) {
if (err) throw err
db.get('foo', function (err, value) {
if (err) return console.log('foo does not exist')
console.log('got foo =', value)
})
})
// vs the equivalent:
var db = levelup(location, options) // will throw if an error occurs
db.get('foo', function (err, value) {
if (err) return console.log('foo does not exist')
console.log('got foo =', value)
})
The location argument is available as a read-only property on the returned LevelUP instance.
options
levelup() takes an optional options object as its second argument; the following properties are accepted:
'createIfMissing' (boolean, default: true): If true, will initialise an empty database at the specified location if one doesn't already exist. If false and a database doesn't exist you will receive an error in your open() callback and your database won't open.
'errorIfExists' (boolean, default: false): If true, you will receive an error in your open() callback if a database already exists at the specified location.
'compression' (boolean, default: true): If true, all compressible data will be run through the Snappy compression algorithm before being stored. Snappy is very fast, so you are unlikely to gain much speed by disabling it; leave this on unless you have a good reason to turn it off.
'cacheSize' (number, default: 8 * 1024 * 1024): The size (in bytes) of the in-memory LRU cache holding frequently used uncompressed block contents.
'keyEncoding' and 'valueEncoding' (string, default: 'utf8'): The encoding of the keys and values passed through Node.js' Buffer implementation (see Buffer#toString()). 'utf8' is the default encoding for both keys and values, so you can simply pass in strings and expect strings from your get() operations. You can also pass Buffer objects as keys and/or values and conversion will be performed. Supported encodings are: hex, utf8, ascii, binary, base64, ucs2, utf16le. 'json' encoding is also supported; see below.
'db' (object, default: LevelDOWN): LevelUP is backed by LevelDOWN to provide an interface to LevelDB. You can completely replace the use of LevelDOWN by providing a "factory" function that will return a LevelDOWN API compatible object given a location argument. For further information, see MemDOWN, a fully LevelDOWN API compatible replacement that uses a memory store rather than LevelDB. Also see Abstract LevelDOWN, a partial implementation of the LevelDOWN API that can be used as a base prototype for a LevelDOWN substitute.
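As an illustration of the 'db' option, here is a minimal sketch of swapping in MemDOWN as the backend (this assumes the memdown package is installed; with a memory store the location is little more than a name):

var levelup = require('levelup')
var memdown = require('memdown')

// keep the whole store in memory rather than on disk
var db = levelup('in-memory-example', { db: memdown })

db.put('greeting', 'hello', function (err) {
  if (err) return console.log('Ooops!', err)
})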
Additionally, each of the main interface methods accepts an optional options object that can be used to override 'keyEncoding' and 'valueEncoding'.
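For illustration, a minimal sketch of supplying options at creation time and then overriding the value encoding for a single call (the location and keys used here are just examples):

var levelup = require('levelup')

// store values as JSON by default; create the store if it doesn't exist yet
var db = levelup('./options-db', {
  createIfMissing: true,
  keyEncoding: 'utf8',
  valueEncoding: 'json'
})

db.put('profile', { name: 'Yuri', dob: '16 February 1941' }, function (err) {
  if (err) return console.log('Ooops!', err)
})

// override the default 'json' value encoding for this one call only
db.put('plain', 'just a string', { valueEncoding: 'utf8' }, function (err) {
  if (err) return console.log('Ooops!', err)
})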
open() opens the underlying LevelDB store. In general you should never need to call this method directly as it's automatically called by levelup().
However, it is possible to reopen a database after it has been closed with close(), although this is not generally advised.
close() closes the underlying LevelDB store. The callback will receive any error encountered during closing as the first argument.
You should always clean up your LevelUP instance by calling close() when you no longer need it, to free up resources. A LevelDB store cannot be opened by multiple instances of LevelDB/LevelUP simultaneously.
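For example, a minimal sketch of shutting a store down cleanly:

db.close(function (err) {
  if (err) return console.log('Error while closing!', err)
  console.log('Store closed and resources released')
})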
put() is the primary method for inserting data into the store. Both the key and value can be arbitrary data objects.
The callback argument is optional, but if you don't provide one and an error occurs then expect the error to be thrown.
options
Encoding of the key and value objects will adhere to the 'keyEncoding' and 'valueEncoding' options provided to levelup(), although you can provide alternative encoding settings in the options for put() (it's recommended that you stay consistent in your encoding of keys and values in a single store).
If you provide a 'sync' value of true in your options object, LevelDB will perform a synchronous write of the data, although the operation will be asynchronous as far as Node is concerned. Normally, LevelDB passes the data to the operating system for writing and returns immediately; a synchronous write, by contrast, uses fsync() or equivalent, so your callback won't be triggered until the data is actually on disk. Synchronous filesystem writes are significantly slower than asynchronous writes, but if you want to be absolutely sure that the data is flushed then you can use 'sync': true.
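A minimal sketch of the 'sync' option in use (the key and value are just examples):

db.put('important', 'must reach disk', { sync: true }, function (err) {
  if (err) return console.log('Ooops!', err)
  // with 'sync': true the callback only fires once the data has been flushed to disk
  console.log('safely stored')
})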
get() is the primary method for fetching data from the store. The key can be an arbitrary data object, but if it doesn't exist in the store then the callback will receive an error as its first argument.
options
Encoding of the key object will adhere to the 'keyEncoding' option provided to levelup(), although you can provide alternative encoding settings in the options for get() (it's recommended that you stay consistent in your encoding of keys and values in a single store).
LevelDB will by default fill the in-memory LRU cache with data from a call to get(). Disabling this is done by setting fillCache to false.
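For example, a read that skips filling the LRU cache:

db.get('name', { fillCache: false }, function (err, value) {
  if (err) return console.log('Ooops!', err) // likely the key was not found
  console.log('name=' + value)
})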
del() is the primary method for removing data from the store.
options
Encoding of the key object will adhere to the 'keyEncoding' option provided to levelup(), although you can provide alternative encoding settings in the options for del() (it's recommended that you stay consistent in your encoding of keys and values in a single store).
A 'sync' option can also be passed; see put() for details on how this works.
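For example, removing a key with the 'sync' option described under put():

db.del('name', { sync: true }, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('name deleted')
})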
batch() can be used for very fast bulk-write operations (both put and delete). The array argument should contain a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation inside LevelDB. Each operation is contained in an object having the following properties: type, key, value, where the type is either 'put' or 'del'. In the case of 'del' the 'value' property is ignored. Any entries with a 'key' of null or undefined will cause an error to be returned on the callback, and any 'type': 'put' entry with a 'value' of null or undefined will return an error.
var ops = [
{ type: 'del', key: 'father' }
, { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' }
, { type: 'put', key: 'dob', value: '16 February 1941' }
, { type: 'put', key: 'spouse', value: 'Kim Young-sook' }
, { type: 'put', key: 'occupation', value: 'Clown' }
]
db.batch(ops, function (err) {
if (err) return console.log('Ooops!', err)
console.log('Great success dear leader!')
})
options
See put() for a discussion on the options object. You can overwrite the default 'keyEncoding' and 'valueEncoding' and also specify the use of sync filesystem operations.
In addition to encoding options for the whole batch, you can also overwrite the encoding per operation, like:
var ops = [
{
type: 'put',
key: new Buffer([1, 2, 3]),
value: { some: 'json' },
keyEncoding: 'binary',
valueEncoding: 'json'
}
]
batch(), when called with no arguments, will return a Batch object which can be used to build, and eventually commit, an atomic LevelDB batch operation. Depending on how it's used, it is possible to obtain greater performance when using the chained form of batch() over the array form.
db.batch()
.del('father')
.put('name', 'Yuri Irsenovich Kim')
.put('dob', '16 February 1941')
.put('spouse', 'Kim Young-sook')
.put('occupation', 'Clown')
.write(function () { console.log('Done!') })
batch.put(key, value[, options])
Queue a put operation on the current batch; it is not committed until a write() is called on the batch.
The optional options argument can be used to override the default 'keyEncoding' and/or 'valueEncoding'.
batch.del(key[, options])
Queue a del operation on the current batch; it is not committed until a write() is called on the batch.
The optional options argument can be used to override the default 'keyEncoding'.
batch.clear()
Clear all queued operations on the current batch; any previously queued operations will be discarded.
batch.write([callback])
Commit the queued operations for this batch. All operations not cleared will be written to the database atomically; that is, they will either all succeed or fail with no partial commits. The optional callback will be called when the operation has completed, with an error argument if an error has occurred.
A LevelUP object moves through a number of internal states as it is opened and closed, such as "opening", "open", "closing" and "closed".
isOpen() will return true only when the state is "open".
isClosed() will return true only when the state is "closing" or "closed"; it can be useful for determining if read and write operations are permissible. See also isOpen().
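For example, a small sketch that guards a write on the current state:

if (db.isOpen()) {
  db.put('status', 'ready', function (err) {
    if (err) return console.log('Ooops!', err)
  })
} else if (db.isClosed()) {
  console.log('Database is closing or closed; reads and writes are not permissible')
}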
You can obtain a ReadStream of the full database by calling the createReadStream() method. The resulting stream is a complete Node.js-style Readable Stream where 'data' events emit objects with 'key' and 'value' pairs.
db.createReadStream()
.on('data', function (data) {
console.log(data.key, '=', data.value)
})
.on('error', function (err) {
console.log('Oh my!', err)
})
.on('close', function () {
console.log('Stream closed')
})
.on('end', function () {
console.log('Stream ended')
})
The standard pause(), resume() and destroy() methods are implemented on the ReadStream, as is pipe() (see below). 'data', 'error', 'end' and 'close' events are emitted.
Additionally, you can supply an options object as the first parameter to createReadStream() with the following options:
'start': the key you wish to start the read at. By default it will start at the beginning of the store. Note that the start doesn't have to be an actual key that exists; LevelDB will simply find the next key, greater than the key you provide.
'end': the key you wish to end the read on. By default it will continue until the end of the store. Again, the end doesn't have to be an actual key, as an (inclusive) <=-type operation is performed to detect the end. You can also use the destroy() method instead of supplying an 'end' parameter to achieve the same effect.
'reverse' (boolean, default: false): set to true if you want the stream to go in reverse order. Beware that due to the way LevelDB works, a reverse seek will be slower than a forward seek.
'keys' (boolean, default: true): whether the 'data' event should contain keys. If set to true and 'values' set to false then 'data' events will simply be keys, rather than objects with a 'key' property. Used internally by the createKeyStream() method.
'values' (boolean, default: true): whether the 'data' event should contain values. If set to true and 'keys' set to false then 'data' events will simply be values, rather than objects with a 'value' property. Used internally by the createValueStream() method.
'limit' (number, default: -1): limit the number of results collected by this stream. This number represents a maximum number of results and may not be reached if you get to the end of the store or your 'end' value first. A value of -1 means there is no limit.
'fillCache' (boolean, default: false): whether LevelDB's LRU cache should be filled with the data read.
'keyEncoding' / 'valueEncoding' (string): the encoding applied to each read piece of data.
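Putting a few of these options together, a minimal sketch of a bounded read (the keys 'n' and 'o' and the limit are just examples):

db.createReadStream({ start: 'n', end: 'o', limit: 10 })
  .on('data', function (data) {
    // at most 10 entries with keys from 'n' up to and including 'o'
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })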
A KeyStream is a ReadStream where the 'data' events are simply the keys from the database, so it can be used like a traditional stream rather than an object stream.
You can obtain a KeyStream either by calling the createKeyStream() method on a LevelUP object or by passing an options object to createReadStream() with keys set to true and values set to false.
db.createKeyStream()
.on('data', function (data) {
console.log('key=', data)
})
// same as:
db.createReadStream({ keys: true, values: false })
.on('data', function (data) {
console.log('key=', data)
})
A ValueStream is a ReadStream where the 'data' events are simply the values from the database, so it can be used like a traditional stream rather than an object stream.
You can obtain a ValueStream either by calling the createValueStream() method on a LevelUP object or by passing an options object to createReadStream() with values set to true and keys set to false.
db.createValueStream()
.on('data', function (data) {
console.log('value=', data)
})
// same as:
db.createReadStream({ keys: false, values: true })
.on('data', function (data) {
console.log('value=', data)
})
A WriteStream can be obtained by calling the createWriteStream() method. The resulting stream is a complete Node.js-style Writable Stream which accepts objects with 'key' and 'value' pairs on its write() method.
The WriteStream will buffer writes and submit them as batch() operations where the writes occur within the same tick.
var ws = db.createWriteStream()
ws.on('error', function (err) {
console.log('Oh my!', err)
})
ws.on('close', function () {
console.log('Stream closed')
})
ws.write({ key: 'name', value: 'Yuri Irsenovich Kim' })
ws.write({ key: 'dob', value: '16 February 1941' })
ws.write({ key: 'spouse', value: 'Kim Young-sook' })
ws.write({ key: 'occupation', value: 'Clown' })
ws.end()
The standard write(), end(), destroy() and destroySoon() methods are implemented on the WriteStream. 'drain', 'error', 'close' and 'pipe' events are emitted.
You can specify encodings both for the whole stream and for individual entries:
To set the encoding for the whole stream, provide an options object as the first parameter to createWriteStream() with 'keyEncoding' and/or 'valueEncoding'.
To set the encoding for an individual entry:
writeStream.write({
key: new Buffer([1, 2, 3]),
value: { some: 'json' },
keyEncoding: 'binary',
valueEncoding: 'json'
})
If individual write() operations are performed with a 'type' property of 'del', they will be passed on as 'del' operations to the batch.
var ws = db.createWriteStream()
ws.on('error', function (err) {
console.log('Oh my!', err)
})
ws.on('close', function () {
console.log('Stream closed')
})
ws.write({ type: 'del', key: 'name' })
ws.write({ type: 'del', key: 'dob' })
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ type: 'del', key: 'occupation' })
ws.end()
If the WriteStream is created with a 'type' option of 'del', all write() operations will be interpreted as 'del', unless explicitly specified as 'put'.
var ws = db.createWriteStream({ type: 'del' })
ws.on('error', function (err) {
console.log('Oh my!', err)
})
ws.on('close', function () {
console.log('Stream closed')
})
ws.write({ key: 'name' })
ws.write({ key: 'dob' })
// but it can be overridden
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ key: 'occupation' })
ws.end()
A ReadStream can be piped directly to a WriteStream, allowing for easy copying of an entire database. A simple copy() operation is included in LevelUP that performs exactly this on two open databases:
function copy (srcdb, dstdb, callback) {
srcdb.createReadStream().pipe(dstdb.createWriteStream()).on('close', callback)
}
The ReadStream is also fstream-compatible which means you should be able to pipe to and from fstreams. So you can serialize and deserialize an entire database to a directory where keys are filenames and values are their contents, or even into a tar file using node-tar. See the fstream functional test for an example. (Note: I'm not really sure there's a great use-case for this but it's a fun example and it helps to harden the stream implementations.)
KeyStreams and ValueStreams can be treated like standard streams of raw data. If 'keyEncoding' or 'valueEncoding' is set to 'binary', the 'data' events will simply be standard Node Buffer objects straight out of the data store.
approximateSize() can be used to get the approximate number of bytes of file system space used by the range [start..end). The result may not include recently written data.
var db = require('level')('./huge.db')
db.db.approximateSize('a', 'c', function (err, size) {
if (err) return console.error('Ooops!', err)
console.log('Approximate size of range is %d', size)
})
Note: approximateSize() is available via LevelDOWN, which by default is accessible as the db property of your LevelUP instance. This is a specific LevelDB operation and is not likely to be available where you replace LevelDOWN with an alternative back-end via the 'db' option.
getProperty() can be used to get internal details from LevelDB. When issued with a valid property string, a readable string will be returned (this method is synchronous).
Currently, the only valid properties are:
'leveldb.num-files-at-levelN': returns the number of files at level N, where N is an integer representing a valid level (e.g. "0").
'leveldb.stats': returns a multi-line string describing statistics about LevelDB's internal operation.
'leveldb.sstables': returns a multi-line string describing all of the sstables that make up the contents of the current database.
var db = require('level')('./huge.db')
console.log(db.db.getProperty('leveldb.num-files-at-level3'))
// → '243'
Note: getProperty() is available via LevelDOWN, which by default is accessible as the db property of your LevelUP instance. This is a specific LevelDB operation and is not likely to be available where you replace LevelDOWN with an alternative back-end via the 'db' option.
destroy() is used to completely remove an existing LevelDB database directory. You can use this function in place of a full directory rm if you want to be sure to only remove LevelDB-related files. If the directory only contains LevelDB files, the directory itself will be removed as well. If there are additional, non-LevelDB files in the directory, those files, and the directory, will be left alone.
The callback will be called when the destroy operation is complete, with a possible error argument.
Note: destroy() is available via LevelDOWN, which you will have to have available to require(), e.g.:
require('leveldown').destroy('./huge.db', function () { console.log('done!') })
repair() can be used to attempt a restoration of a damaged LevelDB store. From the LevelDB documentation:
If a DB cannot be opened, you may attempt to call this method to resurrect as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function on a database that contains important information.
You will find information on the repair operation in the LOG file inside the store directory.
A repair() can also be used to perform a compaction of the LevelDB log into table files.
The callback will be called when the repair operation is complete, with a possible error argument.
Note: repair() is available via LevelDOWN, which you will have to have available to require(), e.g.:
require('leveldown').repair('./huge.db', function () { console.log('done!') })
LevelUP emits events when the callbacks to the corresponding methods are called.
db.emit('put', key, value): emitted when a new value is 'put'
db.emit('del', key): emitted when a value is deleted
db.emit('batch', ary): emitted when a batch operation has executed
db.emit('ready'): emitted when the database has opened ('open' is a synonym)
db.emit('closed'): emitted when the database has closed
db.emit('opening'): emitted when the database is opening
db.emit('closing'): emitted when the database is closing
If you do not pass a callback to an async function, and there is an error, LevelUP will emit('error', err) instead.
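For example, a sketch of listening for some of these events:

db.on('put', function (key, value) {
  console.log('inserted', key, '=', value)
})
db.on('ready', function () {
  console.log('database is open')
})
db.on('error', function (err) {
  console.log('an operation without a callback failed:', err)
})

db.put('name', 'LevelUP') // no callback supplied, so any error surfaces via the 'error' event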
If you specify 'json' encoding for keys and/or values, you can then supply JavaScript objects to LevelUP and receive them back from all fetch operations, including ReadStreams. LevelUP will automatically stringify your objects and store them as utf8, and parse the strings back into objects before passing them back to you.
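A minimal sketch, assuming a store created with valueEncoding: 'json' (the location is just an example):

var db = levelup('./json-db', { valueEncoding: 'json' })

db.put('person', { name: 'Yuri', dob: '16 February 1941' }, function (err) {
  if (err) return console.log('Ooops!', err)
  db.get('person', function (err, value) {
    if (err) return console.log('Ooops!', err)
    console.log(value.name) // 'Yuri', parsed back into an object for you
  })
})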
A list of Node.js LevelDB modules and projects can be found in the wiki.
When attempting to extend the functionality of LevelUP, it is recommended that you consider using level-hooks and/or level-sublevel. level-sublevel is particularly helpful for keeping additional, extension-specific data in a LevelDB store. It allows you to partition a LevelUP instance into multiple sub-instances that each correspond to discrete namespaced key ranges.
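As a rough sketch of the level-sublevel idea (the exact API should be checked against the level-sublevel documentation; the 'authors' namespace is just an example):

var levelup = require('levelup')
var sublevel = require('level-sublevel')

var db = sublevel(levelup('./mydb'))
var authors = db.sublevel('authors') // keys written here won't collide with the parent store

authors.put('rvagg', 'Rod Vagg', function (err) {
  if (err) return console.log('Ooops!', err)
})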
LevelDB is thread-safe but is not suitable for accessing with multiple processes. You should only ever have a LevelDB database open from a single Node.js process. Node.js clusters are made up of multiple processes so a LevelUP instance cannot be shared between them either.
See the wiki for some LevelUP extensions, including multilevel, that may help if you require a single data store to be shared across processes.
There are multiple ways you can find help in using LevelDB in Node.js; the wiki is a good place to start.
LevelUP is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the CONTRIBUTING.md file for more details.
LevelUP is only possible due to the excellent work of the following contributors:
Rod Vagg | GitHub/rvagg | Twitter/@rvagg |
---|---|---|
John Chesley | GitHub/chesles | Twitter/@chesles |
Jake Verbaten | GitHub/raynos | Twitter/@raynos2 |
Dominic Tarr | GitHub/dominictarr | Twitter/@dominictarr |
Max Ogden | GitHub/maxogden | Twitter/@maxogden |
Lars-Magnus Skog | GitHub/ralphtheninja | Twitter/@ralphtheninja |
David Björklund | GitHub/kesla | Twitter/@david_bjorklund |
Julian Gruber | GitHub/juliangruber | Twitter/@juliangruber |
Paolo Fragomeni | GitHub/hij1nx | Twitter/@hij1nx |
Anton Whalley | GitHub/No9 | Twitter/@antonwhalley |
Matteo Collina | GitHub/mcollina | Twitter/@matteocollina |
Pedro Teixeira | GitHub/pgte | Twitter/@pgte |
A large portion of the Windows support comes from code by Krzysztof Kowalczyk @kjk, see his Windows LevelDB port here. If you're using LevelUP on Windows, you should give him your thanks!
Copyright (c) 2012-2013 LevelUP contributors (listed above).
LevelUP is licensed under an MIT +no-false-attribs license. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details.
LevelUP builds on the excellent work of the LevelDB and Snappy teams from Google and additional contributors. LevelDB and Snappy are both issued under the New BSD Licence.