Package service_decorators simplifies the work of building microservices. The common functions for microservices (such as circuit breaking, rate limiting, and metrics) have been encapsulated in reusable components (decorators). Building a service means decorating the core business logic with the common decorators, so you can focus only on the core business logic (the general idea is sketched below). @Author chaocai2001@icloud.com @Created on 2018-6
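A minimal, self-contained sketch of the decorator idea this package describes. The Service type and the withTimeout/withLogging wrappers below are hypothetical illustrations of the pattern, not this package's actual API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type Request string
    type Response string

    // Service is the core business logic; decorators wrap one Service
    // in another. (Hypothetical type, for illustration only.)
    type Service func(Request) (Response, error)

    // withTimeout decorates a Service with a crude time budget.
    func withTimeout(next Service, d time.Duration) Service {
        return func(req Request) (Response, error) {
            done := make(chan struct{})
            var resp Response
            var err error
            go func() { resp, err = next(req); close(done) }()
            select {
            case <-done:
                return resp, err
            case <-time.After(d):
                return "", errors.New("timeout")
            }
        }
    }

    // withLogging decorates a Service with simple request logging.
    func withLogging(next Service) Service {
        return func(req Request) (Response, error) {
            fmt.Println("request:", req)
            return next(req)
        }
    }

    func main() {
        core := func(req Request) (Response, error) { return "ok", nil }
        // Decorate the core logic; the chain reads inside-out.
        svc := withLogging(withTimeout(core, time.Second))
        resp, err := svc("ping")
        fmt.Println(resp, err)
    }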
Package autorest implements an HTTP request pipeline suitable for use across multiple go-routines and provides the shared routines relied on by AutoRest (see https://github.com/Azure/autorest/) generated Go code. The package breaks sending and responding to HTTP requests into three phases: Preparing, Sending, and Responding. A typical pattern is shown in the sketch below. Each phase relies on decorators to modify and/or manage processing. Decorators may first modify and then pass the data along, pass the data first and then modify the result, or wrap themselves around passing the data (as a logger might do). Decorators run in the order provided. For example, a Preparer built from WithBaseURL("https://microsoft.com/") followed by WithPath("a"), WithPath("b"), and WithPath("c") will set the URL to https://microsoft.com/a/b/c. Preparers and Responders may be shared and re-used (assuming the underlying decorators support sharing and re-use). Performant use is obtained by creating one or more Preparers and Responders shared among multiple go-routines, and a single Sender shared among multiple sending go-routines, all bound together by means of input/output channels. Decorators hold their passed state within a closure (such as the path components in the example above). Be careful to share Preparers and Responders only in a context where such held state applies. For example, it may not make sense to share a Preparer that applies a query string from a fixed set of values. Similarly, sharing a Responder that reads the response body into a passed struct (e.g., ByUnmarshallingJson) is likely incorrect. Lastly, the Swagger specification (https://swagger.io) that drives AutoRest (https://github.com/Azure/autorest/) precisely defines two date forms: date and date-time. The github.com/Azure/go-autorest/autorest/date package provides time.Time derivations to ensure correct parsing and formatting. Errors raised by autorest objects and methods will conform to the autorest.Error interface. See the included examples for more detail. For details on the suggested use of this package by generated clients, see the Client described below.
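The three-phase pattern and the URL example above can be sketched as follows. The decorator names (AsGet, WithBaseURL, WithPath, DoRetryForAttempts, ByDiscardingBody, ByClosing) are real go-autorest decorators, though the exact composition here is illustrative:

    package main

    import (
        "fmt"
        "net/http"
        "time"

        "github.com/Azure/go-autorest/autorest"
    )

    func main() {
        // Prepare: decorators run in order, so the URL becomes
        // https://microsoft.com/a/b/c
        req, err := autorest.Prepare(&http.Request{},
            autorest.AsGet(),
            autorest.WithBaseURL("https://microsoft.com/"),
            autorest.WithPath("a"),
            autorest.WithPath("b"),
            autorest.WithPath("c"))
        if err != nil {
            fmt.Println("prepare:", err)
            return
        }

        // Send: decorators can manage retries, logging, and so on.
        resp, err := autorest.Send(req,
            autorest.DoRetryForAttempts(5, time.Second))
        if err != nil {
            fmt.Println("send:", err)
            return
        }

        // Respond: drain and close the body so the connection can be reused.
        err = autorest.Respond(resp,
            autorest.ByDiscardingBody(),
            autorest.ByClosing())
        fmt.Println("respond:", err)
    }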
Package uitable provides a decorator for formatting data as a table.
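A short usage sketch, assuming this is the common gosuri/uitable API (New, AddRow, MaxColWidth); the row data is invented for illustration:

    package main

    import (
        "fmt"

        "github.com/gosuri/uitable"
    )

    func main() {
        table := uitable.New()
        table.MaxColWidth = 50 // wrap long cell contents

        table.AddRow("NAME", "BIRTHDAY", "BIO")
        table.AddRow("Ada Lovelace", "December 10, 1815", "Ada was a mathematician")
        table.AddRow("Alan Turing", "June 23, 1912", "Alan was a computer scientist")

        // The table implements fmt.Stringer, so it can be printed directly.
        fmt.Println(table)
    }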
Package dst declares the types used to represent decorated syntax trees for Go packages.
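Assuming this is the dave/dst package, a minimal parse-and-print round trip through its decorator subpackage looks roughly like this (treat the exact entry points as assumptions):

    package main

    import (
        "log"

        "github.com/dave/dst/decorator"
    )

    func main() {
        src := "package demo\n\n// Greet says hello.\nfunc Greet() string { return \"hello\" }\n"

        // Parse into a decorated syntax tree: comments ("decorations")
        // stay attached to the nodes they belong to.
        f, err := decorator.Parse(src)
        if err != nil {
            log.Fatal(err)
        }

        // Print back to source; decorations survive the round trip.
        if err := decorator.Print(f); err != nil {
            log.Fatal(err)
        }
    }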
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage: we've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database, as with the live stats we pull from our storage nodes. This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library.
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages, then register environmental statistics and kick off a goroutine in your main method to serve debug requests (the sketch below shows this boilerplate). Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add a global definition right after all your imports, much like you'd create a logger with many logging libraries (the mon scope in the sketch below). Now, make the error return value of HandleHTTP named (so, (err error)), and add a defer line as the very first instruction of HandleHTTP. Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil. You should then have something like the sketch below; we'll unpack what's going on here shortly. For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
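A hedged reconstruction of the instrumentation described above and unpacked below. monkit.Package, environment.Register, and present.HTTP are real monkit entry points, but the import paths (the v2 layout is assumed here; adjust for v3) and the program skeleton around them are reconstructions:

    package main

    import (
        "context"
        "net/http"

        monkit "gopkg.in/spacemonkeygo/monkit.v2"
        "gopkg.in/spacemonkeygo/monkit.v2/environment"
        "gopkg.in/spacemonkeygo/monkit.v2/present"
    )

    // The package-level monitoring Scope, much like a package-level logger.
    var mon = monkit.Package()

    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        // Three calls: mon.Task() finds the Func, the second call starts
        // the stopwatches (and mutates ctx), and the deferred third call
        // stops them and records err.
        defer mon.Task()(&ctx)(&err)
        return Proxy(ctx, w, r)
    }

    func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
        defer mon.Task()(&ctx)(nil) // nil instead of &err: no error capture
        // ... proxy the request to the target ...
        return nil
    }

    // Stored variant: runtime.Caller runs only once, so this is cheaper
    // after the first call. Don't share one across different functions!
    var myFuncMon = mon.Task()

    func main() {
        // Observability boilerplate: register environment stats and serve
        // the debug endpoints (/stats, /funcs, /trace/...) on localhost:9000.
        environment.Register(monkit.Default)
        go http.ListenAndServe("localhost:9000", present.HTTP(monkit.Default))

        // ... serve the application itself on localhost:8080 ...
    }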
You should see a textual report for HandleHTTP, with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? The deferred instrumentation line in the sketch above is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can store the result of mon.Task() and reuse it (as the myFuncMon variable in the sketch above does), which is more performant every time after the first time: runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case, if you get the error "Bad Request: regex preselect matches 0 functions."
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
Package log provides a structured logger. Structured logging produces logs easily consumed later by humans or machines. Humans might be interested in debugging errors, or tracing specific requests. Machines might be interested in counting interesting events, or aggregating information for off-line processing. In both cases, it is important that the log messages are structured and actionable. Package log is designed to encourage both of these best practices. The fundamental interface is Logger. Loggers create log events from key/value data. The Logger interface has a single method, Log, which accepts a sequence of alternating key/value pairs, which this package names keyvals. The RunTask function in the sketch below is an example of a function using a Logger to create log events. The keys in that example are "taskID" and "event". The values are task.ID, "starting task", and "task complete". Every key is followed immediately by its value. Keys are usually plain strings. Values may be any type that has a sensible encoding in the chosen log format. With structured logging it is a good idea to log simple values without formatting them. This practice allows the chosen logger to encode values in the most appropriate way. A contextual logger stores keyvals that it includes in all log events. Building appropriate contextual loggers reduces repetition and aids consistency in the resulting log output. With, WithPrefix, and WithSuffix add context to a logger. We can use With to improve the RunTask example. The improved version emits the same log events as the original for the first and last calls to Log. Passing the contextual logger to taskHelper enables each log event created by taskHelper to include the task.ID even though taskHelper does not have access to that value. Using contextual loggers this way simplifies producing log output that enables tracing the life cycle of individual tasks. (See the Contextual example for the full code of the above snippet.) A Valuer function stored in a contextual logger generates a new value each time an event is logged. The Valuer example demonstrates how this feature works. Valuers provide the basis for consistently logging timestamps and source code location. The log package defines several valuers for that purpose. See Timestamp, DefaultTimestamp, DefaultTimestampUTC, Caller, and DefaultCaller. A common logger initialization sequence that ensures all log entries contain a timestamp and source location looks like the initialization in the sketch below. Applications with multiple goroutines want each log event written to the same logger to remain separate from other log events. Package log provides two simple solutions for concurrent safe logging. NewSyncWriter wraps an io.Writer and serializes each call to its Write method. Using a SyncWriter has the benefit that the smallest practical portion of the logging logic is performed within a mutex, but it requires the formatting Logger to make only one call to Write per log event. NewSyncLogger wraps any Logger and serializes each call to its Log method. Using a SyncLogger has the benefit that it guarantees each log event is handled atomically within the wrapped logger, but it typically serializes both the formatting and output logic. Use a SyncLogger if the formatting logger may perform multiple writes per log event. This package relies on the practice of wrapping or decorating loggers with other loggers to provide composable pieces of functionality.
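A sketch of the patterns above, assuming the go-kit log API (older releases import github.com/go-kit/kit/log instead); RunTask and the key names come from the text, the surrounding program is reconstruction:

    package main

    import (
        "os"

        "github.com/go-kit/log"
    )

    type Task struct{ ID int }

    // RunTask logs alternating key/value pairs ("keyvals").
    func RunTask(task Task, logger log.Logger) {
        logger.Log("taskID", task.ID, "event", "starting task")
        // ... do the work ...
        logger.Log("taskID", task.ID, "event", "task complete")
    }

    // RunTaskWith shows the contextual variant: With stores taskID so
    // every later event includes it, even events created by helpers
    // that never see task.ID.
    func RunTaskWith(task Task, logger log.Logger) {
        logger = log.With(logger, "taskID", task.ID)
        logger.Log("event", "starting task")
        // taskHelper(logger) would also emit taskID automatically
        logger.Log("event", "task complete")
    }

    func main() {
        // Common initialization: every entry gets a timestamp and caller.
        var logger log.Logger
        logger = log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
        logger = log.With(logger, "ts", log.DefaultTimestampUTC, "caller", log.DefaultCaller)
        RunTask(Task{ID: 1}, logger)
    }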
The decorator practice also means that Logger.Log must return an error, because some implementations—especially those that output log data to an io.Writer—may encounter errors that cannot be handled locally. This in turn means that Loggers that wrap other loggers should return errors from the wrapped logger up the stack. Fortunately, the decorator pattern also provides a way to avoid the necessity to check for errors every time an application calls Logger.Log. An application required to panic whenever its Logger encounters an error could initialize its logger as in the sketch below.
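A sketch of that panic-on-error initialization, using log.LoggerFunc (a real go-kit adapter) to wrap a base logger:

    // Assumes: import ("os"; "github.com/go-kit/log")
    base := log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
    logger := log.LoggerFunc(func(keyvals ...interface{}) error {
        if err := base.Log(keyvals...); err != nil {
            panic(err) // this application chose to panic on logging errors
        }
        return nil
    })
    _ = logger // use like any other log.Logger; call sites may ignore Log's error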
Package nrlogrusplugin decorates logs for sending to the New Relic backend. Use this package if you already send your logs to New Relic and want to enable linking between your APM events and traces with your logs. Since Logrus is completely API-compatible with the stdlib logger, you can replace your `"log"` imports with `log "github.com/sirupsen/logrus"` and follow the steps below to enable the logging product for use with the stdlib Go logger. Using `logger.WithField` (https://godoc.org/github.com/sirupsen/logrus#Logger.WithField) and `logger.WithFields` (https://godoc.org/github.com/sirupsen/logrus#Logger.WithFields) is supported. However, if the field key collides with one of the keys used by the New Relic Formatter, the value will be overwritten. Reserved keys are those found in the `logcontext` package (https://godoc.org/github.com/newrelic/go-agent/v3/integrations/logcontext/#pkg-constants). Supported types for `logger.WithField` and `logger.WithFields` field values are numbers, booleans, strings, and errors. Func types are dropped and all other types are converted to strings. Requires v1.4.0 of the Logrus package or newer. For the best linking experience be sure to enable Distributed Tracing. To enable log decoration, set your log's formatter to the `nrlogrusplugin.ContextFormatter`, or do the same on the logrus standard logger (see the sketch below). The logger will then look for a newrelic.Transaction inside its context and decorate logs accordingly. Therefore, the Transaction must be added to the context and passed to the logger; a bare logging call must be transformed to include the context via the logger's `WithContext`. When properly configured, your log statements will be in JSON format with one message per line. If the `trace.id` key is missing, be sure that Distributed Tracing is enabled and that the Transaction context has been added to the logger using `WithContext` (https://godoc.org/github.com/sirupsen/logrus#Logger.WithContext).
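A sketch of that configuration using the v3 Go agent; the application name and license key are placeholders:

    package main

    import (
        "context"

        "github.com/newrelic/go-agent/v3/integrations/nrlogrusplugin"
        "github.com/newrelic/go-agent/v3/newrelic"
        "github.com/sirupsen/logrus"
    )

    func main() {
        app, err := newrelic.NewApplication(
            newrelic.ConfigAppName("my-app"),
            newrelic.ConfigLicense("__YOUR_LICENSE_KEY__"),
            newrelic.ConfigDistributedTracerEnabled(true), // needed for trace.id linking
        )
        if err != nil {
            panic(err)
        }

        logger := logrus.New()
        // Decorate log output with New Relic linking metadata.
        logger.SetFormatter(nrlogrusplugin.ContextFormatter{})

        txn := app.StartTransaction("example")
        defer txn.End()

        // The formatter finds the Transaction through the context.
        ctx := newrelic.NewContext(context.Background(), txn)
        logger.WithContext(ctx).Info("Hello New Relic!")
    }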
Package goresilience is a framework/library of utilities to improve the resilience of programs easily. The library is based on the `goresilience.Runner` interface; these runners can be chained using the decorator pattern (like the std library `http.Handler` interface). This makes the library extensible, flexible, and clean to use. The runners can be chained as if they were middleware acting on the whole execution of the `goresilience.Func`. The first example uses a single runner, retry with the default settings; this makes the `goresilience.Func` execute and be retried N times if it fails. The next uses more than one `goresilience.Runner` chained together to create a very resilient execution of the `goresilience.Func`; in this case a runner that retries and also times out, with the timeout configured (see the sketch below). Another example measures all the execution through the runners using Prometheus metrics. One example shows that when the result is not needed we don't need to use an inline function, and another shows that we could also use objects to pass parameters and get our results.
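A sketch of the retry-plus-timeout chain, assuming the slok/goresilience layout (goresilience.RunnerChain with retry.NewMiddleware and timeout.NewMiddleware); the config field values are illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/slok/goresilience"
        "github.com/slok/goresilience/retry"
        "github.com/slok/goresilience/timeout"
    )

    func main() {
        // Chain middlewares into a single Runner: timeout wraps each
        // attempt made by retry, which wraps the goresilience.Func.
        runner := goresilience.RunnerChain(
            retry.NewMiddleware(retry.Config{Times: 3}),
            timeout.NewMiddleware(timeout.Config{Timeout: 500 * time.Millisecond}),
        )

        err := runner.Run(context.Background(), func(ctx context.Context) error {
            // ... the protected operation ...
            return nil
        })
        fmt.Println("result:", err)
    }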
Package nject is a general purpose dependency injection framework. It provides wrapping, pruning, and indirect variable passing. It is type safe and using it requires no type assertions. There are two main injection APIs: Run and Bind. Bind is designed to be used at program initialization and does as much work as possible then, rather than during main execution. The API for nject is a list of providers (injectors) that are run in order. The final function in the list must be called. The other functions are called only if their value is consumed by a later function that must be called. In a simple example that chains context.Background, log.Default, and http.ListenAndServe as the final function, context.Background and log.Default are not invoked because their outputs are not used by the final function (http.ListenAndServe). The basic idea of nject is to assemble a Collection of providers and then use that collection to supply inputs for functions that may use some or all of the provided types. One big win from dependency injection with nject is the ability to reshape various different functions into a single signature. For example, having a bunch of functions with different APIs all bound as http.HandlerFunc is easy. Providers produce or consume data. The data is distinguished by its type. If you want three different strings, then define three different types; you can then have a function that does things with the three types, and that function would be a valid injector or final function in a provider Collection. The sketch after this overview creates such a sequence and executes it: Run injects a myFirst value and the sequence of providers runs; genSecond() injects a mySecond, and myStringFunc() combines the myFirst and mySecond to create a myThird. Then the function given in Run saves that final value; the expected output is noted in the sketch. Providers are grouped into linear sequences. When building an injection chain, the providers are grouped into several sets: LITERAL, STATIC, RUN. The LITERAL and STATIC sets run once per initialization. The RUN set runs once per invocation. Providers within a set are executed in the order that they were originally specified. Providers whose outputs are not consumed are omitted unless they are marked Required(). Collections are bound with Bind(&invocationFunction, &initializationFunction). The invocationFunction is expected to be used over and over, but the initializationFunction is expected to be used less frequently. The STATIC set is re-invoked each time the initialization function is run. The LITERAL set is just the literal values in the collection. The STATIC set is composed of the cacheable injectors. The RUN set is everything else. All injectors are ordinary functions; none of the input or output parameters may be anonymously-typed functions. An anonymously-typed function is a function without a named type. Injectors whose output values are not used by a downstream handler are dropped from the handler chain. They are not invoked. Injectors that have no output values are a special case and they are always retained in the handler chain. An injector that is annotated as Cacheable() may be promoted to the STATIC set. An injector that is annotated as MustCache() must be promoted to the STATIC set: if it cannot be promoted then the collection is deemed invalid. An injector may not be promoted to the STATIC set if it takes as input data that comes from a provider that is not in the STATIC or LITERAL sets.
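A hedged reconstruction of the three-types example described above (the type and function names come from the text; nject.Run is the real API, and the import path assumes a recent muir/nject release):

    package main

    import (
        "fmt"

        "github.com/muir/nject"
    )

    // Distinct named types let the injector tell three strings apart.
    type myFirst string
    type mySecond string
    type myThird string

    func genSecond() mySecond { return "second" }

    func myStringFunc(f myFirst, s mySecond) myThird {
        return myThird(string(f) + " " + string(s))
    }

    func main() {
        var result myThird
        err := nject.Run("example",
            myFirst("first"), // literal value injected into the chain
            genSecond,        // injector: provides mySecond
            myStringFunc,     // injector: combines myFirst and mySecond
            func(t myThird) { // final function: always called
                result = t
            },
        )
        fmt.Println(result, err) // expected output: first second <nil>
    }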
To continue that last rule with an example: arguments to the invocation function are in the RUN set, so if the invoke function takes an int as one of its inputs, then no injector that takes an int as an argument may be promoted to the STATIC set. Injectors in the STATIC set will be run exactly once per set of input values. If the inputs are consistent, then the output will be a singleton. This is true across injection chains. If the same cacheable provider is used in multiple chains, as long as the same integer is injected, all chains will share the same pointer. Injectors in the STATIC set are only run for initialization. For some things, like opening a database, that may still be too often. Injectors that are marked Memoized must be promoted to the STATIC set. Memoized injectors are only run once per combination of inputs. Their outputs are remembered. If called enough times with different arguments, memory will be exhausted. Memoized injectors may not have more than 90 inputs. Memoized injectors may not have any inputs that are go maps, slices, or functions. Arrays, structs, and interfaces are okay. This requirement is recursive, so a struct that has a slice in it is not okay. Fallible injectors are special injectors that change the behavior of the injection chain if they return error. Fallible injectors in the RUN set that return error will terminate execution of the injection chain. A non-wrapper function that returns nject.TerminalError is a fallible injector. The TerminalError does not have to be the last return value. The nject package converts TerminalError objects into error objects, so only the fallible injector should use TerminalError. Anything that consumes the TerminalError should do so by consuming error instead. Fallible injectors can be in both the STATIC set and the RUN set. Their behavior is a bit different. If a non-nil value is returned as the TerminalError from a fallible injector in the RUN set, none of the downstream providers will be called. The provider chain returns from that point with the TerminalError as a return value. Since all return values must be consumed by a middleware provider or the bound invoke function, fallible injectors must come downstream from a middleware handler that takes error as a returned value if the invoke function (the function that runs a bound injection chain) does not return error. If a fallible injector returns nil for the TerminalError, the other output values are made available for downstream handlers to consume. The other output values are not considered return values and are not available to be consumed by upstream middleware handlers. The error returned by a fallible injector is not available downstream. If a non-nil value is returned as the TerminalError from a fallible injector in the STATIC set, the rest of the STATIC set will be skipped. If there is an init function and it returns error, then the value returned by the fallible injector will be returned via the init function. Unlike fallible injectors in the RUN set, the error output by a fallible injector in the STATIC set is available downstream (but only in the RUN set -- nothing else in the STATIC set will execute). A wrap function interrupts the linear sequence of providers. It may or may not invoke the remainder of the sequence that comes after it. The remainder of the sequence is provided to the wrap function as a function that it may call. The type signature of a wrap function is a function that receives a function as its first parameter, and that first parameter must be of an anonymous type. The sketch below shows a fallible injector and a wrap function side by side.
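Hedged sketches of both kinds of provider per the rules above (the names are illustrative; assumes imports of fmt, log, time, and nject):

    // A fallible injector: a non-nil TerminalError stops the chain, and
    // downstream consumers see it as an ordinary error.
    func checkLimit(i int) nject.TerminalError {
        if i > 10 {
            return fmt.Errorf("limit exceeded: %d", i)
        }
        return nil
    }

    // A wrap function: its first parameter is an anonymously-typed
    // function that invokes the remainder of the provider chain.
    func timer(inner func() error, logger *log.Logger) error {
        start := time.Now()
        err := inner() // run the downstream providers and final function
        logger.Printf("chain took %v", time.Since(start))
        return err // propagate rather than mask the downstream error
    }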
When a wrap function runs, it is responsible for invoking the rest of the provider chain. It does this by calling inner(). The parameters to inner are available as inputs to downstream providers. The value(s) returned by inner come from the return values of other wrapper functions and from the return value(s) of the final function. Wrap functions can call inner() zero or more times. The values returned by wrap functions must be consumed by another upstream wrap function or by the init function (if using Bind()). Wrap functions have a small amount of runtime overhead compared to other kinds of functions: one call to reflect.MakeFunc(). Wrap functions serve the same role as middleware, but are usually easier to write. Wrap functions that invoke inner() multiple times in parallel are not well supported at this time, and such invocations must have the wrap function decorated with Parallel(). Final functions are simply the last provider in the chain. They look like regular Go functions. Their input parameters come from other providers. Their return values (if any) must be consumed by an upstream wrapper function or by the init function (if using Bind()). Wrap functions that return error should take error as a returned value so that they do not mask a downstream error. Wrap functions should not return TerminalError because they internally control whether the downstream chain is called. Literal values are values in the provider chain that are not functions. Provider chains can be invalid for many reasons: inputs of a type not provided earlier in the chain; annotations that cannot be honored (e.g. MustCache & Memoize); return values that are not consumed; functions that take or return functions with an anonymous type other than wrapper functions; a chain that does not terminate with a function; etc. Bind() and Run() will return error when presented with an invalid provider chain. Bind() and Run() will return error rather than panic. After Bind()ing an init and invoke function, calling them will not panic unless a provider panic()s. A wrapper function can be used to catch panics and turn them into errors. When doing that, it is important to propagate any errors that are coming up the chain. If there is no guaranteed function that will return error, one can be added with Shun(). Bind() uses a complex and somewhat expensive O(n^2) set of rules to evaluate which providers should be included in a chain and which can be dropped. The goal is to keep the ones you want and remove the ones you don't want. Bind() tries to figure this out based on the dependencies and the annotations. MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired chain element and all outputs are consumed by some other provider. Not MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired provider. Not MustConsume, Desired: include if all inputs are available. MustConsume, Desired: only include if all outputs are transitively consumed by a Required or Desired chain element. When there are multiple providers of a type, Bind() tries to get it from the closest provider. Providers that have unmet dependencies will be eliminated from the chain unless they're Required. The remainder of this document consists of suggestions for how to use nject. Contributions to this section would be welcome, as would links to blogs or other discussions of using nject in practice.
The best practice for using nject inside a large project is to have a few common chains that everyone imports. Most of the time, these common chains will be early in the sequence of providers. Customization of the import chains happens in many places. This is true for services, libraries, and tests. For tests, a wrapper that includes the standard chain makes it easier to write tests. See github.com/memsql/ntest for helper functions and more examples. If nject cannot bind or run a chain, it will return error. The returned error is generally very good, but it does not contain the full debugging output. The full debugging output can be obtained with the DetailedError function. If the detailed error shows that nject has a bug, note that part of the debug output includes a regression test that can be turned into an nject issue. Remove the comments to hide the original type names. The Reorder() decorator allows injection chains to be fully or partially reordered. Reorder is currently limited to a single pass and does not know which injectors are ultimately going to be included in the final chain. It is likely that if you mark your entire chain with Reorder, you'll have unexpected results. On the other hand, Reorder provides a safe and easy way to solve some common problems. For example: providing optional options to an injected dependency. Because the default options are marked as Shun, they'll only be included if they have to be included. If a user of thingChain wants to override the options, they simply need to mark their override as Reorder. To make this extra friendly, a helper function to do the override can be provided and used. Recommended best practice is to have injectors shut down the things they themselves start. They should do their own cleanup. Inside tests, an injector can use t.Cleanup() for this. For services, something like t.Cleanup can easily be built. Alternatively, any wrapper function can do its own cleanup in a defer that it defines. Wrapper functions have a small runtime performance penalty, so if you have more than a couple of providers that need cleanup, it makes sense to include something like a CleaningService. The normal direction of forced inclusion is that an upstream provider is required because a downstream provider uses a type produced by the upstream provider. There are times when the relationship needs to be reversed. For example, a type gets modified by a downstream injector. The simplest option is to combine the providers into one function. Another possibility is to mark the upstream provider with MustConsume and have it produce a type that is only consumed by the downstream provider. Lastly, the providers can be grouped with Cluster so that they'll be included or excluded as a group. Example shows what gets included and what does not for several injection chains. These examples are meant to show the subtlety of what gets included and why. This example explores injecting a database handle or transaction only when they're used.
Package garif defines all the Go structures required to model a SARIF log file. These structures were created using the JSON schema sarif-schema-2.1.0.json of SARIF log files, available at https://github.com/oasis-tcs/sarif-spec/tree/master/Schemata. The package provides constructors for all structures (see constructors.go). These constructors ensure that the returned structure instantiation is valid with respect to the JSON schema and should be used in place of plain structure instantiation. The root structure is LogFile. The package provides utility decorators for the most commonly used structures (see decorators.go).
Package ataman is a colored terminal text rendering engine using ANSI sequences. The project aims at simple text attribute manipulation with templates. Here are a couple of examples to introduce the project. Customization of decoration styles can be done through `decorate.Style`. Templates follow simple rules, and decoration styles use the following dictionary. Some template examples use the curly decoration style.
A simple package for adding colourful comments to source code lines, primarily for producing user-friendly error messages.
go-ld-prime is a series of Go interfaces for manipulating LD data. See https://gitlab.dms3.io/ld/specs for more information about the basics of "What is LD?". See https://gitlab.dms3.io/ld/go-ld-prime/tree/master/doc/README.md for more documentation about go-ld-prime's architecture and usage. Here in the godoc, the first couple of types to look at are Node and NodeAssembler. These types provide a generic description of the data model. A Node is a piece of LD data which can be inspected. A NodeAssembler is used to create Nodes. (A NodeBuilder is just like a NodeAssembler, but allocates memory, whereas a NodeAssembler just fills up memory; using these carefully allows construction of very efficient code.) Different NodePrototypes can be used to describe Nodes which follow certain logical rules (e.g., we use these as part of implementing Schemas), and can also be used so that programs can use different memory layouts for different data (which can be useful for constructing efficient programs when data has known shape for which we can use specific or compacted memory layouts). If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next type you should look at is LinkSystem. The most typical use of LinkSystem is to use the linking/cid package to get a LinkSystem that works with CIDs, and then assign the StorageWriteOpener and StorageReadOpener fields in order to control where data is stored to and read from. Methods on the LinkSystem then provide the functions typically used to get data in and out of Nodes so you can work with it. This root package only provides the essential interfaces, as well as a Path implementation and a variety of error types. Most actual functionality is found in subpackages. Note that since interfaces in this package are the core of the library, choices made here maximize correctness and performance -- these choices are *not* always the choices that would maximize ergonomics. (Ergonomics can come on top; performance generally can't.) You can check out the 'must' or 'fluent' packages for more ergonomics; 'traversal' provides some ergonomics features for certain uses; any use of schemas with codegen tooling will provide more ergonomic options; or you can make your own function decorators that do what *you* need. Example_createDataAndMarshal shows how you can feed data into a NodeBuilder, and also how to then hand that to an Encoder. Often you'll encode implicitly through a LinkSystem.Store call instead, but you can do it directly, too. Example_unmarshalData shows how you can use a Decoder and a NodeBuilder (or NodePrototype) together to do unmarshalling. Often you'll do this implicitly through a LinkSystem.Load call instead, but you can do it directly, too.
Package xgbutil is a utility library designed to make common tasks with the X server easier. The central design choice that has driven development is to hide the complexity of X wherever possible but expose it when necessary. For example, the xevent package provides an implementation of an X event loop that acts as a dispatcher to event handlers set up with the xevent, keybind and mousebind packages. At the same time, the event queue is exposed and can be modified using xevent.Peek and xevent.DequeueAt. The xgbutil package itself is quite small, containing only some type definitions and the initial setup for an X connection. Much of the functionality of xgbutil comes from its sub-packages, each of which is appropriately documented. xgbutil is go-gettable. XGB is the main dependency, and is required for all packages inside xgbutil. graphics-go and freetype-go are also required if using the xgraphics package. A quick example to demonstrate that xgbutil is working correctly is sketched below: the output will be a list of names of all top-level windows and their geometry including window manager decorations (assuming your window manager supports some basic EWMH properties). The examples directory contains a sizable number of examples demonstrating common tasks with X. They are intended to demonstrate a single thing each, although a few that require setup are necessarily long. Each example is heavily documented. The examples directory should be your first stop when learning how to use xgbutil. xgbutil is also used heavily throughout my (BurntSushi) window manager, Wingo, which may be useful reference material. Wingo project page: https://github.com/BurntSushi/wingo While I am (BurntSushi) fairly confident that XGB is thread safe, I am only somewhat confident that xgbutil is thread safe. It simply has not been tested enough for my confidence to be higher. Note that the xevent package's X event loop is not concurrent; designing a generally concurrent X event loop is extremely complex. Instead, the onus is on you, the user, to design concurrent callback functions if concurrency is desired.
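A hedged reconstruction of the quick example described above, assuming the usual xgbutil sub-packages (ewmh, xwindow):

    package main

    import (
        "fmt"
        "log"

        "github.com/BurntSushi/xgbutil"
        "github.com/BurntSushi/xgbutil/ewmh"
        "github.com/BurntSushi/xgbutil/xwindow"
    )

    func main() {
        X, err := xgbutil.NewConn()
        if err != nil {
            log.Fatal(err)
        }

        // Ask the window manager for all top-level client windows (EWMH).
        clients, err := ewmh.ClientListGet(X)
        if err != nil {
            log.Fatal(err)
        }

        for _, w := range clients {
            name, _ := ewmh.WmNameGet(X, w)
            // DecorGeometry includes window-manager decorations.
            geom, err := xwindow.New(X, w).DecorGeometry()
            if err != nil {
                continue
            }
            fmt.Printf("%s %v\n", name, geom)
        }
    }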
Package errc simplifies error and defer handling. Package errc is a burner package: a proof-of-concept to explore better semantics for error and defer handling. Error handling and deferring using this package looks like the sketch below: checking for a nil error is replaced by a call to Must on an error catcher, and a defer statement is similarly replaced by a call to Defer. Error handling in Go can be tricky to get right. For instance, how to use defer may depend on the situation. For a Close method that frees up resources, a simple use of defer suffices. For a CloseWithError method where a nil error indicates the successful completion of a transaction, however, it should be ensured that a nil error is not passed inadvertently, for instance when there is a panic because a server runs out of memory and is killed by a cluster manager. In a correct way to commit a file to Google Cloud Storage, the err variable is initialized to errPanicking to ensure a non-nil err is passed to CloseWithError when a panic occurs. This ensures that a panic will not cause a corrupted file. If all went well, a separate path is used to collect the error returned by Close. Returning the error from Close is important to signal to retry logic that the file was not successfully written. Once the Close of w is successful all further errors are irrelevant; the error of the first Close is therefore willfully ignored. These are a lot of subtleties to get the error handling working properly! The same can be achieved using errc, as in the sketch below. Observe how a straightforward application of the idiomatic check-and-defer pattern leads to the correct results. The error of the first Close is now ignored explicitly using the Discard error handler, making it clear that this is what the programmer intended. Error handlers can be used to decorate errors, log them, or do anything else you usually do with errors. Suppose we want to use github.com/pkg/errors to decorate errors; a simple handler and its use are also shown in the sketch below.
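A sketch of both snippets described above, assuming the mpvl/errc API (Catch, Handle, Must, Defer, Discard); the GCS client variable and the exact handler wiring are illustrative placeholders:

    // Assumes: import ("context"; "io"; "github.com/mpvl/errc";
    // "github.com/pkg/errors") and a GCS-style client in scope.
    // Committing a file to Google Cloud Storage, errc style. The deferred
    // CloseWithError receives a non-nil error if anything (including a
    // panic) goes wrong; its own error is explicitly ignored with Discard.
    func writeToGS(ctx context.Context, bucket, dst string, r io.Reader) (err error) {
        e := errc.Catch(&err)
        defer e.Handle()

        w := client.Bucket(bucket).Object(dst).NewWriter(ctx)
        e.Defer(w.CloseWithError, errc.Discard)

        _, err = io.Copy(w, r)
        e.Must(err, msg("copying content"))
        return w.Close() // this Close's error still matters for retry logic
    }

    // An error handler that decorates errors with github.com/pkg/errors:
    type msg string

    func (m msg) Handle(s errc.State, err error) error {
        return errors.Wrap(err, string(m))
    }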