![Build Status](https://travis-ci.org/thehowl/binary.svg?branch=master)
# binary
A faster binary encoder.
```
go get howl.moe/binary
```
## Migrating to version 2
All slice-related methods have been removed, because they allocated their own
slices (for reads). The way slices are encoded in binary protocols is largely
arbitrary: some protocols prefix the length, some only in certain cases, some
use varints, and others use different integer widths for the length. For this
reason, we removed the methods.
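To give an idea of that variability, here is a contrived sketch, using only the
standard library, of two incompatible ways to frame the same three-byte slice:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	data := []byte{0xAA, 0xBB, 0xCC}

	// Fixed-width framing: a 2-byte big-endian length prefix, then the bytes.
	fixed := make([]byte, 2+len(data))
	binary.BigEndian.PutUint16(fixed, uint16(len(data)))
	copy(fixed[2:], data)

	// Varint framing: a variable-width length prefix, then the bytes.
	varint := binary.AppendUvarint(nil, uint64(len(data)))
	varint = append(varint, data...)

	fmt.Printf("fixed:  % x\nvarint: % x\n", fixed, varint)
}
```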
`ByteSlice` and `String` have been kept, since they are used internally.
There is no way to read a string directly as there was in the previous
version, again because of the variability in how lengths are encoded. You can
still use the newly added `Read` method (which implements `io.Reader`),
passing a byte slice of the desired length.
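In practice, reading a length-prefixed string yourself could look roughly like
the sketch below, which uses only the standard library; `r` can be any
`io.Reader`, including this package's reader, and the 4-byte big-endian prefix
is just one of many possible conventions:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readString reads a 4-byte big-endian length prefix, then exactly that many
// bytes, and returns them as a string.
func readString(r io.Reader) (string, error) {
	var prefix [4]byte
	if _, err := io.ReadFull(r, prefix[:]); err != nil {
		return "", err
	}
	buf := make([]byte, binary.BigEndian.Uint32(prefix[:]))
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}

func main() {
	// A 5-byte string prefixed with its length.
	r := bytes.NewReader([]byte("\x00\x00\x00\x05hello"))
	s, err := readString(r)
	fmt.Println(s, err) // hello <nil>
}
```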
## Why you should upgrade
The biggest, yet simplest, change was removing all slice allocations. This is
a common trick, used in places such as fasthttp and nanojson (yes, that's what
we call shameless advertising).
Previously, every single tiny read and write allocated a byte slice. That is
surprisingly expensive: each one is a heap allocation that the garbage
collector then has to track. Writes are now buffered in an internal 512-byte
array, and by encoding binary data directly into that array we get a large
performance boost.
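The trick looks roughly like this; note that this is an illustrative sketch of
the idea, not the package's actual code or API:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
)

// writer buffers encoded data in a fixed-size array so that small writes
// never allocate; the array is flushed to the underlying io.Writer when full.
type writer struct {
	w   io.Writer
	buf [512]byte
	n   int
}

func (w *writer) WriteUint32(v uint32) error {
	if w.n+4 > len(w.buf) {
		if err := w.Flush(); err != nil {
			return err
		}
	}
	// Encode straight into the internal array: no per-write heap allocation.
	binary.BigEndian.PutUint32(w.buf[w.n:], v)
	w.n += 4
	return nil
}

func (w *writer) Flush() error {
	_, err := w.w.Write(w.buf[:w.n])
	w.n = 0
	return err
}

func main() {
	w := &writer{w: io.Discard}
	for i := 0; i < 1000; i++ {
		_ = w.WriteUint32(uint32(i)) // 1000 writes, zero heap allocations
	}
	_ = w.Flush()
	fmt.Println("done")
}
```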
```
$ git checkout v1
$ go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: github.com/thehowl/binary
BenchmarkWriteSmall-4    37062871      27.2 ns/op       1 B/op     1 allocs/op
BenchmarkWriteMedium-4    5283705     210 ns/op        40 B/op     5 allocs/op
BenchmarkWriteLong-4       849973    1417 ns/op       240 B/op    12 allocs/op
PASS
ok      github.com/thehowl/binary    4.592s
```
```
$ git checkout master
$ go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: howl.moe/binary
BenchmarkWriteSmall-4                  60495008      17.9 ns/op     0 B/op    0 allocs/op
BenchmarkWriteSmallEncodingBinary-4    40247256      29.3 ns/op     1 B/op    1 allocs/op
BenchmarkWriteMedium-4                 19292994      52.6 ns/op     0 B/op    0 allocs/op
BenchmarkWriteLong-4                   11028130     104 ns/op       0 B/op    0 allocs/op
BenchmarkWriteLongEncodingBinary-4      5126353     256 ns/op      96 B/op    3 allocs/op
PASS
ok      howl.moe/binary    7.164s
```
As you can see, for writes of large chunks of data this is up to a 14x
improvement over the previous version (1417 ns/op down to 104 ns/op on the
long write). As it turns out, the allocations were so expensive that the
package was actually slower than `encoding/binary`. Not anymore: this package
is now more than 2x faster than `encoding/binary` on long writes (104 ns/op
vs 256 ns/op).