Kiwi logger & context keeper

Beta version. The API is still subject to change. Use it carefully!

Kiwi /kiːwiː/ are birds native to New Zealand, in the genus Apteryx and family Apterygidae. They
are flightless, have hair-like feathers, and smell like mushrooms. They look strange and funny, so
when I wrote a logger for the Go language I decided to dedicate it to this beast, which I have
never seen in the wild (I live very far from the places where kiwis live).
Kiwi Logger is a library with an odd logic that logs your application data in its own strange way.
Well... the kiwi package is a structured logger for key-value formats (with logfmt as the default).
It is highly customizable, without hardcoded levels or nailed-down fields. It has dynamic filters
that allow you to change your logs on the fly in two respects: reducing the number of records and
restricting the set of fields in each record. This allows fine control over what should be logged
at the moment, which is especially useful for debugging. So kiwi separates the output logic (when
and where you want to write) from the logging process itself (just log anything you want).
Features offered by structured logging and logfmt generally, and by Kiwi in particular
- simple format with an explicit key for each value (logfmt-like) for high readability by humans
- optional JSON format that machines like
- no hardwired levels, no hardcoded fields
- dynamic filtering of the output (change the log verbosity and the set of record fields on the fly)
- can keep the context of the application
- fast forking of subloggers with inherited context
- optional lazy evaluation of arguments for lowering the logger footprint
- fast enough and careful about memory allocations (indeed faster than the popular logrus and log15)
The Kiwi logger is built around one idea:
Log everything in the code with as much detail as possible, but actually write only what you need at the moment.
In an ideal world, of course, you could log everything, and that is the right way: you would then
look into the central log storage that gathers the logs from all the application instances, and
view and filter the records you are interested in at the moment. But in reality most systems are
developed locally, and often there is no central log storage in the development environment; the
grep utility is the only interface for filtering the logs.
For example, you are debugging feature1, which is spread across several modules of your system.
You instrument the code with multiple log() calls and set the verbosity level to DEBUG. Then,
when you are done with the feature, you decrease the level to ERROR. Next you begin to debug
feature2, which is spread even wider across the modules of your system, and it all repeats: you
set the level to DEBUG and you see the records both for the feature2 you need and for feature1
from the past. Welcome to grepping.
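That grep-driven workflow can be sketched in the shell; the log lines below are made up for illustration:

```shell
# Two features' debug records are interleaved in one log; grep is the only filter.
printf 'feature1 msg=start\nfeature2 msg=loaded\nfeature1 msg=done\n' \
  | grep 'feature2'
# prints: feature2 msg=loaded
```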
The "logfmt" format solves this problem with tags. A log record consists of an arbitrary number of
key-value pairs, so you can easily filter out only the records you need by their keys. The kiwi
logger allows you to set such filters dynamically at runtime.
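The tag idea can be sketched in plain Go. This toy filter is a stdlib-only illustration of selecting records by a key-value pair, not kiwi's actual implementation (real logfmt parsing also handles quoted values):

```go
package main

import (
	"fmt"
	"strings"
)

// hasPair reports whether a logfmt-style line contains the given key=value pair.
func hasPair(line, key, value string) bool {
	for _, token := range strings.Fields(line) {
		if token == key+"="+value {
			return true
		}
	}
	return false
}

func main() {
	records := []string{
		"feature=feature1 msg=start",
		"feature=feature2 msg=loaded",
		"feature=feature1 msg=done",
	}
	// Keep only the records tagged with the feature being debugged right now.
	for _, r := range records {
		if hasPair(r, "feature", "feature2") {
			fmt.Println(r) // prints: feature=feature2 msg=loaded
		}
	}
}
```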
Architecture?

Scared? ;) Ha-ha... Well, really it is not as weird as the picture looks :) Let me explain it with
clearer and more boring illustrations.

The logger instances (in different goroutines, for example) write everything you want to
log. Sinks gather the data from all the instances. A "sink" is the name for an output: it could be
a file, stdout, or any other thing that implements io.Writer. The filters of each sink have rules
to pass only the records you really want in that output. For example, you can dedicate one file to
the errors coming from module1 and another file to the errors and warnings coming from module2.
Or you can pass the full details of each record to a logfile for debugging, but write only the
important information, with an error message and a status, to stderr.
Recipe: export a handler, or set up any kind of client, for changing these filters in your app.
Then you gain the ability to dynamically change the flow and the verbosity of the logs: for
example, increase the verbosity for a specific module or a single handler and decrease it for the
rest of the application.
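One way to realize that recipe is an HTTP endpoint that swaps the filter at runtime. The sketch below is stdlib-only; the endpoint path and the shape of the filter are made up for illustration, since kiwi's real filters are configured through its sink API:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// verboseModule names the module whose debug records currently pass the filter.
// It can be swapped at any moment from another goroutine.
var verboseModule atomic.Value

// passes decides whether a record tagged with this module reaches the sink.
func passes(module string) bool {
	return module == verboseModule.Load().(string)
}

func main() {
	verboseModule.Store("module1")

	// A client hits /loglevel?module=module2 to redirect verbosity on the fly.
	http.HandleFunc("/loglevel", func(w http.ResponseWriter, r *http.Request) {
		verboseModule.Store(r.URL.Query().Get("module"))
		fmt.Fprintln(w, "ok")
	})

	fmt.Println(passes("module1")) // true: module1 is the verbose one for now
	// http.ListenAndServe(":8080", nil) // would expose the switch in a real app
}
```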
Docs 
See the documentation in the wiki. For examples of logger usage, see
the cmd/* subfolders. And of course, for the API description, look at
godoc.
Installation 
The package has no external dependencies except the standard library. So just:
go get github.com/grafov/kiwi
The library builds have been tested with Go 1.8.
Usage examples
import (
	"os"

	"github.com/grafov/kiwi"
)

func main() {
	// Bootstrap a new logger instance.
	log := kiwi.New()

	// Log anything you want as key-value pairs.
	log.Log("msg", "something", "another key", "another value")

	// A key without a value is allowed too.
	log.Log("key-only")

	// Accumulate pairs in a record and flush it later with Log().
	log.Add("where", "module1", "event", "something happened")
	log.Add("event", "and now something completely different").Log()

	// Values of the basic types are accepted.
	log.Log("key", 123, "key2", 1.23e3, "key3", 'u', "key4", true)

	// Define a sink: logfmt to stdout, filtered by key and value.
	out := kiwi.SinkTo(os.Stdout, kiwi.Logfmt).Start()
	out.WithKey("userID").WithValue("level", "FATAL")

	// Another sink: JSON to stderr, with a numeric range filter.
	out2 := kiwi.SinkTo(os.Stderr, kiwi.JSON).Start()
	out2.WithInt64Range("userID", 100, 500).WithoutValue("label", "debug")
}
See more ready-to-run samples in the cmd subdirectories. The filters are described in the wiki:
Filtering.
The context records
The kiwi logger allows you to keep some pairs for the lifetime of a logger instance.
import (
	"fmt"
	"os"

	"github.com/grafov/kiwi"
)

func main() {
	// Pairs passed to With() are kept for the lifetime of the instance.
	log1 := kiwi.New()
	log1.With("userID", 1000, "PID", os.Getpid())
	log1.Log("msg", "details about something")

	// A new instance derived from the old one inherits its context.
	log2 := log1.New()
	log2.Log("key", "value")

	// The context can be inspected and reset at any time.
	appContext := log2.GetContext()
	fmt.Printf("%+v\n", appContext)
	log2.ResetContext()
}
Thread safety
It is unsafe by design. At first I used a version that was safe for working in multiple
goroutines. It was not only slow, it was simply not needed in many cases. If you need a new logger
in another execution thread, just create another instance; even better, clone an old instance into
a new one to pass the context to the subroutine. That is all.
log1 := kiwi.New().With("context key", "context value")
log2 := log1.New()
log2.With("another key", "another value")
go subroutine(log2, otherArgs...)
For small apps, where you don't want to initialize all these instances, you may like the global
kiwi.Log() function. This function just immediately flushes its arguments to the sinks, and by
design it is safe for concurrent use. Also, due to this design simplicity, it does not support
context, only regular values. If you need context, your application is a complex thing, and hence
you should initialize a new kiwi.Logger instance.
Evaluation rules
- Keys and values are evaluated immediately when they are added to a record.
- Context values are evaluated once, when they are added to a logger.
- For lazy evaluation of context and record values, pass them as functions:
// For lazy evaluation you need a function that returns a string.
func longActionForDelayedEvaluation() string {
	// Do something long here...
	return strconv.Itoa(result)
}

myLog.Add("lazy-sample", longActionForDelayedEvaluation) // but not longActionForDelayedEvaluation()
The logger accepts functions without arguments that return a string: func() string. Hence the
value of lazy-sample from the example above will be evaluated only on the Log() call.
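The effect of lazy evaluation can be demonstrated with a toy record type. This is a stdlib-only sketch of the idea, not kiwi's internals, using the same func() string convention:

```go
package main

import (
	"fmt"
	"strings"
)

// record is a toy key-value record: values added as func() string stay
// unevaluated until Log() is called.
type record struct {
	keys []string
	vals []interface{}
}

func (r *record) Add(key string, val interface{}) *record {
	r.keys = append(r.keys, key)
	r.vals = append(r.vals, val)
	return r
}

// Log renders the record in a logfmt-like form, calling lazy values only now.
func (r *record) Log() string {
	pairs := make([]string, 0, len(r.keys))
	for i, k := range r.keys {
		v := r.vals[i]
		if f, ok := v.(func() string); ok {
			v = f() // evaluated here, at Log() time
		}
		pairs = append(pairs, fmt.Sprintf("%s=%v", k, v))
	}
	return strings.Join(pairs, " ")
}

func main() {
	calls := 0
	expensive := func() string {
		calls++
		return "computed"
	}
	r := new(record)
	r.Add("eager", 42).Add("lazy", expensive)
	fmt.Println("calls before Log:", calls) // 0: nothing evaluated yet
	fmt.Println(r.Log())                    // eager=42 lazy=computed
	fmt.Println("calls after Log:", calls)  // 1
}
```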
Additional features in other packages
I try to keep the main logging package simple. The filters-formatters-sinks concept is the core of
kiwi tag filtering, so it is all placed in a single package, alongside the basic formatters for
JSON and logfmt. All other features I try to move into separate packages. The kiwi repository has
the following subpackages you can import:
- level — imitates traditional syslog-like levels (read more details below)
- timestamp — provides the logger instance with an additional timestamp field
- strict — helper functions for more type control over your records
Warning about evil severity levels
Most loggers come to you with the concept of levels: you filter everything out, and only the
records of the preconfigured level and the levels above it actually appear in the log. The current
level is read from the configuration of the application, and some loggers allow you to change the
level at runtime. A level here is just like a key in logfmt, but logfmt comes with a more general
idea: you can use an arbitrary number of keys for filtering, not only the predefined level words
but anything like feature, module, etc. So you can filter not only by severity but also set up a
general taxonomy of categories across all the parts (subsystems, modules) of your application.
Another problem with the traditional syslog-like levels is that there is no clear specification of
what exactly should go to each level. Search the internet and you will find many guides with
controversial recommendations on how to distinguish all these "standard" levels and map them to the
various events in your application. So in reality programmers often debate about the right level
for a log record: for example, when you should use "fatal" instead of "critical", or "debug"
instead of "info". Very often, severity levels obstruct the understanding of the logs.
You can use syslog-like levels in kiwi if you are comfortable with them (see the helper functions
in the level subpackage) and interpret them as you wish. Severity levels in kiwi don't play any
role in deciding how to output a record: records with any level pass to all the sinks, and the
filters of each sink decide how to actually display a record or filter it out completely.
Instead of FAQ
- The Kiwi logger does not strictly follow the logfmt specs.
- The ideas of the key-value format are like those of JSON, but oriented toward readability for humans first.
- Yes, it was architected and developed to be standard number 15, competing with the others. It does not pretend to be the log format for everything.
- No, it is not related to the log15 logger, though kiwi shares the same logfmt format and some ideas with it.
- It does not offer a "Logger" interface, because IMO an interface is useless for loggers. It offers interfaces for the parts of the logger, such as the formatters.
Similar works for structured logging
Comparison with other loggers
It is not the fastest logger among those benchmarked, but it is fast enough and careful about
memory allocations. It is much faster than logrus and log15, but slower than logxi in the tests.
More detailed tests are needed, though. See the benchmark results at
github.com/grafov/go-loggers-comparison.
Roadmap
What should be done before the first release:
- test the behaviour of all the filters and fix possible bugs
- test the JSON output and fix possible bugs
- realize the API for custom filters
Future plans:
- extend the predefined filters (cancelled)
- optional colour formatter for the console
- throttling mode for sinks
- increase test coverage up to 50%
- add tests for concurrent execution use cases
- increase test coverage up to 75%
- multiline output for values
- increase test coverage up to 80%
See the details about the tasks and ideas in TODO.org (orgmode format).
See the current test coverage.
Origins