What is cbor?
The 'cbor' npm package is used for encoding and decoding data in the Concise Binary Object Representation (CBOR) format. CBOR is a binary data serialization format that is designed to be small, fast, and suitable for constrained environments. It is often used in IoT, web, and mobile applications where efficiency is critical.
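A minimal sketch (illustrative only, not taken from the package docs) of how compact the encoding is compared with the equivalent JSON text:
const cbor = require('cbor');
const data = { id: 42, name: 'sensor', active: true };
const asCbor = cbor.encode(data);                  // Buffer of CBOR bytes
const asJson = Buffer.from(JSON.stringify(data));  // equivalent JSON text
console.log(asCbor.length, asJson.length);         // CBOR is typically the shorter of the two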
What are cbor's main functionalities?
Encoding Data
This feature allows you to encode JavaScript objects into CBOR format. The example demonstrates encoding a simple object asynchronously.
const cbor = require('cbor');
const data = { key: 'value' };
cbor.encodeAsync(data).then(encoded => {
console.log(encoded);
});
Decoding Data
This feature allows you to decode CBOR data back into JavaScript objects. The example shows how to decode a CBOR-encoded buffer.
const cbor = require('cbor');
const encodedData = Buffer.from('a1636b65796576616c7565', 'hex'); // CBOR for { key: 'value' }
cbor.decodeFirst(encodedData).then(decoded => {
console.log(decoded);
});
Streaming Encoding
This feature supports streaming encoding, which is useful for handling large datasets or continuous data streams. The example demonstrates encoding data in a stream.
const cbor = require('cbor');
const stream = new cbor.Encoder(); // Transform stream: write JavaScript values, read CBOR buffers
stream.on('data', chunk => {
console.log(chunk);
});
stream.write({ key: 'value' });
stream.end();
Streaming Decoding
This feature supports streaming decoding, which is useful for processing large or continuous CBOR data streams. The example shows how to decode data from a stream.
const cbor = require('cbor');
const stream = new cbor.Decoder(); // Transform stream: write CBOR bytes, read JavaScript values
stream.on('data', obj => {
console.log(obj);
});
stream.write(Buffer.from('a1636b65796576616c7565', 'hex')); // CBOR for { key: 'value' }
stream.end();
Other packages similar to cbor
msgpack5
The 'msgpack5' package provides MessagePack encoding and decoding, which is another binary serialization format similar to CBOR. MessagePack is also designed to be efficient and compact, making it suitable for similar use cases. Compared to CBOR, MessagePack has a slightly different data model and may offer different performance characteristics depending on the use case.
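For comparison, a hedged sketch of the equivalent round-trip with msgpack5 (the factory call and encode/decode methods follow that package's documented usage):
const msgpack = require('msgpack5')(); // factory returns an instance with encode/decode
const buf = msgpack.encode({ key: 'value' });
console.log(msgpack.decode(buf)); // { key: 'value' }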
protobufjs
The 'protobufjs' package is used for encoding and decoding data in Protocol Buffers (protobuf) format, which is a language-neutral, platform-neutral, extensible mechanism for serializing structured data. Protobuf is often used in communication protocols and data storage. It is more complex and feature-rich compared to CBOR, offering schema definitions and more control over data serialization.
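A hedged sketch of the schema-driven protobufjs workflow using its reflection API (the message and field names here are invented for illustration):
const protobuf = require('protobufjs');
// Define a message type at runtime; loading a .proto file is also supported.
const Reading = new protobuf.Type('Reading')
  .add(new protobuf.Field('sensor', 1, 'string'))
  .add(new protobuf.Field('value', 2, 'double'));
new protobuf.Root().define('demo').add(Reading);
const buffer = Reading.encode(Reading.create({ sensor: 'temp', value: 21.5 })).finish();
console.log(Reading.decode(buffer));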
avsc
The 'avsc' package provides support for Apache Avro, a binary serialization format that includes rich data structures and a compact, fast binary encoding. Avro is often used in big data applications and supports schema evolution. Compared to CBOR, Avro is more focused on data interchange and storage in distributed systems.
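A hedged sketch of the avsc workflow (the record name and fields are invented for illustration); note that Avro needs a schema up front, while CBOR does not:
const avro = require('avsc');
const type = avro.Type.forSchema({
  type: 'record',
  name: 'Reading',
  fields: [
    { name: 'sensor', type: 'string' },
    { name: 'value', type: 'double' }
  ]
});
const buf = type.toBuffer({ sensor: 'temp', value: 21.5 });
console.log(type.fromBuffer(buf));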
cbor
Encode and parse data in the Concise Binary Object Representation (CBOR) data format (RFC7049).
Installation:
$ npm install --save cbor
NOTE
This package now requires node.js 4.1 or higher. If you want a version that
works with older node.js versions, you can install like this:
npm install 'hildjj/node-cbor#node0' --save
Documentation:
See the full API documentation.
From the command line:
$ bin/json2cbor package.json > package.cbor
$ bin/cbor2json package.cbor
$ bin/cbor2diag package.cbor
Example:
var cbor = require('cbor');
var assert = require('assert');
var encoded = cbor.encode(true);
cbor.decodeFirst(encoded, function(error, obj) {
assert.ok(obj === true);
});
var m = new Map();
m.set(1, 2);
encoded = cbor.encode(m);
Allows streaming as well:
var cbor = require('cbor');
var fs = require('fs');
var d = new cbor.Decoder();
d.on('data', function(obj){
console.log(obj);
});
var s = fs.createReadStream('foo');
s.pipe(d);
var d2 = new cbor.Decoder({input: '00', encoding: 'hex'});
d2.on('data', function(obj){
console.log(obj);
});
There is also support for synchronous decodes:
try {
console.log(cbor.decodeFirstSync('02'));
console.log(cbor.decodeAllSync('0202'));
} catch (e) {
  // thrown when the input is not valid CBOR
}
The sync encoding and decoding are exported as a leveldb encoding, as cbor.leveldb.
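A minimal usage sketch, assuming an older levelup API that accepts a location string and a custom valueEncoding object (the database location is made up for illustration):
const cbor = require('cbor');
const levelup = require('levelup');
const db = levelup('./mydb', { valueEncoding: cbor.leveldb }); // values stored as CBOR bytes
db.put('answer', { value: 42 }, function (err) {
  if (err) throw err;
  db.get('answer', function (err, obj) {
    if (err) throw err;
    console.log(obj); // { value: 42 }, decoded back from CBOR
  });
});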
Supported types
The following types are supported for encoding:
- boolean
- number (including -0, NaN, and ±Infinity)
- string
- Array, Set (encoded as Array)
- Object (including null), Map
- undefined
- Buffer
- Date
- RegExp
- url.URL
- bignumber
Decoding supports the above types, including the following CBOR tag numbers:
Tag | Generated Type
----|----------------
0   | Date
1   | Date
2   | bignumber
3   | bignumber
4   | bignumber
5   | bignumber
32  | url.URL
35  | RegExp
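A short round-trip sketch for a couple of entries from the lists above (exact decoded container types can depend on decoder options):
const cbor = require('cbor');
// A Date round-trips through a tagged value and comes back as a Date.
const when = new Date('2017-01-01T00:00:00Z');
console.log(cbor.decodeFirstSync(cbor.encode(when)) instanceof Date); // true
// Map keys that are not strings survive, unlike in JSON.
const m = new Map([[1, 'one'], [2, 'two']]);
console.log(cbor.decodeFirstSync(cbor.encode(m)));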
Adding new Encoders
There are several ways to add a new encoder:
encodeCBOR method
This is the easiest approach, if you can modify the class being encoded. Add an encodeCBOR method to your class, which takes a single parameter: the encoder currently being used. Your method should return true on success, else false. Your method may call encoder.push(buffer) or encoder.pushAny(any) as needed.
For example:
const {Tagged} = require('cbor') // Tagged wraps a value with a CBOR tag number

class Foo {
constructor () {
this.one = 1
this.two = 2
}
encodeCBOR (encoder) {
const tagged = new Tagged(64000, [this.one, this.two])
return encoder.pushAny(tagged)
}
}
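With that method in place, instances encode through the normal entry points; a short usage sketch (assuming the Foo class above):
const cbor = require('cbor')

// The Encoder notices the encodeCBOR method and lets Foo serialize itself,
// producing tag 64000 wrapping the array [one, two].
const encoded = cbor.encode(new Foo())
console.log(encoded.toString('hex'))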
You can also modify an existing type by monkey-patching an encodeCBOR function onto its prototype, but this isn't recommended.
addSemanticType
Sometimes, you want to support an existing type without modifying it. In this case, call addSemanticType(type, encodeFunction) on an existing Encoder instance. The encodeFunction takes an encoder and an object to encode, for example:
const {Encoder} = require('cbor')

class Bar {
constructor () {
this.three = 3
}
}
const enc = new Encoder()
enc.addSemanticType(Bar, (encoder, b) => {
encoder.pushAny(b.three)
})
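A usage sketch for the instance above, assuming the Encoder behaves as a Transform stream that accepts JavaScript values on its writable side (as in the streaming examples earlier):
const chunks = []
enc.on('data', (chunk) => chunks.push(chunk))
enc.on('end', () => console.log(Buffer.concat(chunks).toString('hex')))
enc.end(new Bar()) // Bar encodes via the registered semantic type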
Adding new decoders
Most of the time, you will want to add support for decoding a new tag type. If the Decoder class encounters a tag it doesn't support, it will generate a Tagged instance that you can handle or ignore as needed. To have a specific type generated instead, pass a tags option to the Decoder's constructor: an object with tag numbers as keys and functions as values. Each function is passed the decoded value associated with the tag and should return the final decoded value. For the Foo example above, this might look like:
const {Decoder} = require('cbor')

const d = new Decoder({tags: {
  64000: (val) => {
    const foo = new Foo()
    foo.one = val[0]
    foo.two = val[1]
    return foo
  }
}})
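A usage sketch, assuming the Foo class and its encodeCBOR method from earlier; the decoder emits reconstructed Foo instances on 'data':
const cbor = require('cbor')
d.on('data', (obj) => {
  console.log(obj instanceof Foo) // true
})
d.end(cbor.encode(new Foo())) // write the tagged bytes, then end the stream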
Developers
The tests for this package use a set of test vectors from RFC 7049 appendix A by importing a machine readable version of them from https://github.com/cbor/test-vectors. For these tests to work, you will need to use the command git submodule update --init
after cloning or pulling this code. See https://gist.github.com/gitaarik/8735255#file-git_submodules-md for more information.
Get a list of build steps with npm run. I use npm run dev, which rebuilds, runs tests, and refreshes a browser window with coverage metrics every time I save a .coffee file. If you don't want to run the fuzz tests every time, set a NO_GARBAGE environment variable:
env NO_GARBAGE=1 npm run dev