cbor
Encode and parse data in the Concise Binary Object Representation (CBOR) data format (RFC7049).
Supported Node.js versions
This project now only supports versions of Node that the Node team is currently supporting. Ava's support statement is what we will be using as well. Currently, that means Node 10+ is required. If you need to support an older version of Node (back to version 6), use cbor version 5.2.x, which will get nothing but security updates from here on out.
Installation:
$ npm install --save cbor
# optional: only needed for the big float and big decimal types
$ npm install --save bignumber.js
NOTE
If you are going to use this on the web, use cbor-web instead.
Documentation:
See the full API documentation.
For a command-line interface, see cbor-cli.
Example:
var cbor = require('cbor');
var assert = require('assert');

var encoded = cbor.encode(true); // returns <Buffer f5>
cbor.decodeFirst(encoded, function(error, obj) {
  // error is non-null if there was an error
  // obj is the decoded value
  assert.ok(obj === true);
});

// integers work as Map keys
var m = new Map();
m.set(1, 2);
encoded = cbor.encode(m); // <Buffer a1 01 02>
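The encoded map can be decoded back as well, for example with the synchronous API described below:

cbor.decodeFirstSync(encoded); // Map { 1 => 2 }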
Allows streaming as well:
var cbor = require('cbor');
var fs = require('fs');

var d = new cbor.Decoder();
d.on('data', function(obj){
  console.log(obj);
});

var s = fs.createReadStream('foo');
s.pipe(d);

var d2 = new cbor.Decoder({input: '00', encoding: 'hex'});
d2.on('data', function(obj){
  console.log(obj);
});
There is also support for synchronous decodes:
try {
  console.log(cbor.decodeFirstSync('02'));  // 2
  console.log(cbor.decodeAllSync('0202'));  // [2, 2]
} catch (e) {
  // throws on invalid input
}
The sync encoding and decoding are exported as a leveldb encoding, as cbor.leveldb.
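For example, a minimal sketch of using that encoding object directly (its encode and decode are the synchronous routines, so a value round-trips in memory):

const cbor = require('cbor');
// a level-style encoding: encode/decode functions plus buffer/name metadata
const buf = cbor.leveldb.encode({a: 1});
console.log(cbor.leveldb.decode(buf)); // the original value comes back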
highWaterMark
The synchronous routines for encoding and decoding will have problems with objects that are larger than 16kB, which is the default buffer size for Node streams. There are a few ways to fix this:
- pass in a highWaterMark option with the value of the largest buffer size you think you will need:
cbor.encodeOne(Buffer.alloc(40000), {highWaterMark: 65535})
- use stream mode. Catch the data, finish, and error events. Make sure to call end() when you're done.
const enc = new cbor.Encoder()
enc.on('data', buf => { /* handle the encoded buffer */ })
enc.on('error', console.error)
enc.on('finish', () => { /* all inputs have been encoded */ })
enc.end(['foo', 1, false])
- use encodeAsync(), which uses the stream approach from the previous option internally to return a memory-inefficient promise for a Buffer, as sketched below.
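For example (a sketch; the object being encoded is just an illustration):

const cbor = require('cbor')

async function main () {
  // encodeAsync resolves with a Buffer of CBOR bytes, avoiding the
  // highWaterMark limitation of the synchronous routines
  const buf = await cbor.encodeAsync(Buffer.alloc(40000))
  console.log(buf.length)
}
main().catch(console.error)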
Supported types
The following types are supported for encoding:
- boolean
- number (including -0, NaN, and ±Infinity)
- string
- Array, Set (encoded as Array)
- Object (including null), Map
- undefined
- Buffer
- Date
- RegExp
- URL
- TypedArrays, ArrayBuffer, DataView
- Map, Set
- BigInt
- bignumber (If you install it. It is needed for big float and big decimal types)
Decoding supports the above types, including the following CBOR tag numbers:
Tag | Generated Type |
---|---|
0 | Date |
1 | Date |
2 | BigInt |
3 | BigInt |
4 | bignumber |
5 | bignumber |
21 | Tagged, with toJSON |
22 | Tagged, with toJSON |
23 | Tagged, with toJSON |
32 | URL |
33 | Tagged |
34 | Tagged |
35 | RegExp |
64 | Uint8Array |
65 | Uint16Array |
66 | Uint32Array |
67 | BigUint64Array |
68 | Uint8ClampedArray |
69 | Uint16Array |
70 | Uint32Array |
71 | BigUint64Array |
72 | Int8Array |
73 | Int16Array |
74 | Int32Array |
75 | BigInt64Array |
77 | Int16Array |
78 | Int32Array |
79 | BigInt64Array |
81 | Float32Array |
82 | Float64Array |
85 | Float32Array |
86 | Float64Array |
258 | Set |
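For instance, a quick round-trip sketch (not exhaustive) showing a couple of the types above surviving encode and decode:

const cbor = require('cbor')

const input = new Map([[1, new Date(0)]])  // integer key, Date value
const output = cbor.decodeFirstSync(cbor.encode(input))
console.log(output)  // a Map, with the Date restored via tag 0/1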
Adding new Encoders
There are several ways to add a new encoder:
encodeCBOR method
This is the easiest approach, if you can modify the class being encoded. Add an encodeCBOR method to your class, which takes a single parameter of the encoder currently being used. Your method should return true on success, else false. Your method may call encoder.push(buffer) or encoder.pushAny(any) as needed.
For example:
const {Tagged} = require('cbor')

class Foo {
  constructor () {
    this.one = 1
    this.two = 2
  }
  encodeCBOR (encoder) {
    // wrap the contents in a private tag number
    const tagged = new Tagged(64000, [this.one, this.two])
    return encoder.pushAny(tagged)
  }
}
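With that method in place, encoding a Foo should produce tag 64000 wrapping the two-element array (the bytes shown are what that tag plus [1, 2] encode to):

const cbor = require('cbor')
const buf = cbor.encode(new Foo())
console.log(buf.toString('hex')) // 'd9fa00820102': tag 64000, then the array [1, 2]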
You can also modify an existing type by monkey-patching an encodeCBOR function onto its prototype, but this isn't recommended.
addSemanticType
Sometimes, you want to support an existing type without modification to that type. In this case, call addSemanticType(type, encodeFunction) on an existing Encoder instance. The encodeFunction takes an encoder and an object to encode, for example:
const {Encoder} = require('cbor')

class Bar {
  constructor () {
    this.three = 3
  }
}
const enc = new Encoder()
enc.addSemanticType(Bar, (encoder, b) => {
  encoder.pushAny(b.three)
})
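As a sketch of exercising that encoder instance, use it in stream mode as described in the highWaterMark section above:

enc.on('data', buf => console.log(buf.toString('hex'))) // '03', the encoding of b.three
enc.on('error', console.error)
enc.end(new Bar())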
Adding new decoders
Most of the time, you will want to add support for decoding a new tag type. If the Decoder class encounters a tag it doesn't support, it will generate a Tagged instance that you can handle or ignore as needed. To have a specific type generated instead, pass a tags option to the Decoder's constructor, consisting of an object with tag number keys and function values. The function will be passed the decoded value associated with the tag, and should return the decoded value. For the Foo example above, this might look like:
const {Decoder} = require('cbor')

const d = new Decoder({tags: { 64000: (val) => {
  // val is the decoded array that was wrapped in tag 64000
  const foo = new Foo()
  foo.one = val[0]
  foo.two = val[1]
  return foo
}}})
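A sketch of a full round trip, using the Foo class with the encodeCBOR method shown earlier:

const cbor = require('cbor')
d.on('data', obj => console.log(obj instanceof Foo)) // true
d.end(cbor.encode(new Foo()))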
You can also replace the default decoders by passing in an appropriate tag function. For example:
cbor.decodeFirstSync(input, {
  tags: {
    0: x => Temporal.Instant.from(x)
  }
})
Developers
The tests for this package use a set of test vectors from RFC 7049 appendix A by importing a machine-readable version of them from https://github.com/cbor/test-vectors. For these tests to work, you will need to use the command git submodule update --init after cloning or pulling this code. See https://gist.github.com/gitaarik/8735255#file-git_submodules-md for more information.
Get a list of build steps with npm run. I use npm run dev, which rebuilds, runs tests, and refreshes a browser window with coverage metrics every time I save a .js file. If you don't want to run the fuzz tests every time, set a NO_GARBAGE environment variable:
env NO_GARBAGE=1 npm run dev