
# json2

Yet another JSON library.
It's created to process unstructured JSON in a convenient and efficient way.

There is also a set of jq filters implemented on top of `json2.Iterator`.
## json2 usage
`Iterator` is stateless.
Most of the methods take a source buffer and an index where to start parsing, and return a result and the index where they stopped parsing.
None of the methods copy or allocate, except those that take a destination buffer as an argument.
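The index-in/index-out convention can be illustrated with a minimal standalone helper. Note that `skipSpaces` below is a hypothetical stand-in written for this sketch, not the json2 API itself:

```go
package main

import "fmt"

// skipSpaces mimics the json2 calling convention: it takes the source
// buffer and a start index, and returns the index where it stopped.
// It never copies the buffer and never allocates.
func skipSpaces(data []byte, i int) int {
	for i < len(data) && (data[i] == ' ' || data[i] == '\t' || data[i] == '\n' || data[i] == '\r') {
		i++
	}

	return i
}

func main() {
	data := []byte("   {\"key\": 1}")

	// The returned index is fed into the next parsing call,
	// which is how stateless methods chain together.
	i := skipSpaces(data, 0)

	fmt.Printf("parsing resumes at index %d: %q\n", i, data[i])
}
```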
The following code is taken from the examples.
```go
var d json2.Iterator

data := []byte(`{"key": "value", "another": 1234}`)

i := 0

i, err := d.Enter(data, i, json2.Object)
if err != nil {
	// handle error
}

var key []byte
var value, another []byte

for d.ForMore(data, &i, json2.Object, &err) {
	key, i, err = d.Key(data, i)
	if err != nil {
		// handle error
	}

	switch string(key) {
	case "key":
		value, i, err = d.DecodeString(data, i, value[:0])
	case "another":
		another, i, err = d.Raw(data, i)
	default:
		i, err = d.Skip(data, i)
	}
	if err != nil {
		// handle error
	}
}
if err != nil {
	// handle error
}
```
```go
var err error
var d json2.Iterator

data := []byte(`"a", 2 3
["array"]
`)

for i := d.SkipSpaces(data, 0); i < len(data); i = d.SkipSpaces(data, i) {
	i, err = processOneObject(data, i) // defined elsewhere
	if err != nil {
		// handle error
	}
}
```
## jq usage
Deprecated in favour of [nikand.dev/go/jq](https://pkg.go.dev/nikand.dev/go/jq).
The advantage of this implementation is that the filters are stateless, so they can be used by multiple goroutines at once.
The rest are disadvantages: the code is more complicated and thus less reliable, only JSON is supported, it's less efficient, and fewer filters are implemented.
The jq package is a set of `Filter`s that take data from one buffer, process it, and append the result to another buffer.
A state value is also taken and returned.
It's used by filters to return multiple values one by one.
The caller must provide nil on the first iteration and the returned state on subsequent iterations.
Iteration must stop when the returned state is nil.
A filter may or may not add a value to the dst buffer.
The `Empty` filter, for example, adds no value and returns a nil state.
The destination buffer is returned even in case of an error.
This is mostly done to avoid allocations when the buffer was grown but an error happened.
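The state protocol can be sketched with a toy filter that yields the elements of a slice one by one. The `iter` type and its `next` method below are invented for this sketch and only mirror the shape of the real API; they are not part of the jq package:

```go
package main

import "fmt"

// iter yields the elements of vals one at a time, following the jq
// state protocol: the caller passes a nil state first, then the
// returned state, and stops when the returned state is nil.
type iter struct {
	vals []string
}

// next appends the current value to dst and returns the state for the
// next call. A nil returned state means iteration is finished.
func (f iter) next(dst []byte, state interface{}) ([]byte, interface{}, error) {
	i, _ := state.(int) // nil state means start from the beginning

	if i >= len(f.vals) {
		return dst, nil, nil // no values at all, like the Empty filter
	}

	dst = append(dst, f.vals[i]...)

	i++
	if i == len(f.vals) {
		return dst, nil, nil // last value, stop here
	}

	return dst, i, nil
}

func main() {
	f := iter{vals: []string{"a", "b", "c"}}

	var (
		res   []byte
		state interface{}
		err   error
	)

	for {
		res, state, err = f.next(res[:0], state)
		if err != nil {
			panic(err)
		}

		fmt.Printf("value: %s\n", res)

		if state == nil {
			break
		}
	}
}
```

The loop in `main` is the caller's side of the contract: nil state on the first call, the returned state afterwards, stop on nil.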
The following code is taken from the examples.
```go
data := []byte(`{"key0":"skip it", "key1": {"next_key": ["array", null, {"obj":"val"}, "trailing element"]}} "next"`)

f := jq.Query{"key1", "next_key", 2}

var res []byte
var i int

res, i, _, err := f.Next(res[:0], data, i, nil)
if err != nil {
	// handle error
}

fmt.Printf("value: %s\n", res)
fmt.Printf("final position: %d of %d\n", i, len(data))

_ = i < len(data) // there is still more data in the buffer
```
This is especially convenient if you need to extract a value from JSON inside base64 inside JSON.
Yes, I've seen such cases, and that is how this library came to life.
```go
data := []byte(`{"key1":"eyJrZXkyIjoie1wia2V5M1wiOlwidmFsdWVcIn0ifQ=="}`)

f := jq.NewPipe(
	jq.Key("key1"),
	&jq.Base64d{
		Encoding: base64.StdEncoding,
	},
	&jq.JSONDecoder{},
	jq.Key("key2"),
	&jq.JSONDecoder{},
	jq.Key("key3"),
)

res, _, _, err := f.Next(nil, data, 0, nil)
if err != nil {
	panic(err)
}
```