Package autorest implements an HTTP request pipeline suitable for use across multiple go-routines and provides the shared routines relied on by AutoRest (see https://github.com/Azure/autorest/) generated Go code. The package breaks sending and responding to HTTP requests into three phases: Preparing, Sending, and Responding. A typical pattern is: Each phase relies on decorators to modify and / or manage processing. Decorators may first modify and then pass the data along, pass the data first and then modify the result, or wrap themselves around passing the data (such as a logger might do). Decorators run in the order provided. For example, the following: will set the URL to: Preparers and Responders may be shared and re-used (assuming the underlying decorators support sharing and re-use). Performant use is obtained by creating one or more Preparers and Responders shared among multiple go-routines, and a single Sender shared among multiple sending go-routines, all bound together by means of input / output channels. Decorators hold their passed state within a closure (such as the path components in the example above). Be careful to share Preparers and Responders only in a context where such held state applies. For example, it may not make sense to share a Preparer that applies a query string from a fixed set of values. Similarly, sharing a Responder that reads the response body into a passed struct (e.g., ByUnmarshallingJson) is likely incorrect. Lastly, the Swagger specification (https://swagger.io) that drives AutoRest (https://github.com/Azure/autorest/) precisely defines two date forms: date and date-time. The github.com/Azure/go-autorest/autorest/date package provides time.Time derivations to ensure correct parsing and formatting. Errors raised by autorest objects and methods will conform to the autorest.Error interface. See the included examples for more detail. 
For details on the suggested use of this package by generated clients, see the Client described below.
Package log provides a structured logger. Structured logging produces logs easily consumed later by humans or machines. Humans might be interested in debugging errors, or tracing specific requests. Machines might be interested in counting interesting events, or aggregating information for off-line processing. In both cases, it is important that the log messages are structured and actionable. Package log is designed to encourage both of these best practices. The fundamental interface is Logger. Loggers create log events from key/value data. The Logger interface has a single method, Log, which accepts a sequence of alternating key/value pairs, which this package names keyvals. Here is an example of a function using a Logger to create log events. The keys in the above example are "taskID" and "event". The values are task.ID, "starting task", and "task complete". Every key is followed immediately by its value. Keys are usually plain strings. Values may be any type that has a sensible encoding in the chosen log format. With structured logging it is a good idea to log simple values without formatting them. This practice allows the chosen logger to encode values in the most appropriate way. A contextual logger stores keyvals that it includes in all log events. Building appropriate contextual loggers reduces repetition and aids consistency in the resulting log output. With, WithPrefix, and WithSuffix add context to a logger. We can use With to improve the RunTask example. The improved version emits the same log events as the original for the first and last calls to Log. Passing the contextual logger to taskHelper enables each log event created by taskHelper to include the task.ID even though taskHelper does not have access to that value. Using contextual loggers this way simplifies producing log output that enables tracing the life cycle of individual tasks. (See the Contextual example for the full code of the above snippet.) 
A Valuer function stored in a contextual logger generates a new value each time an event is logged. The Valuer example demonstrates how this feature works. Valuers provide the basis for consistently logging timestamps and source code location. The log package defines several valuers for that purpose. See Timestamp, DefaultTimestamp, DefaultTimestampUTC, Caller, and DefaultCaller. A common logger initialization sequence that ensures all log entries contain a timestamp and source location looks like this: Applications with multiple goroutines want each log event written to the same logger to remain separate from other log events. Package log provides two simple solutions for concurrent safe logging. NewSyncWriter wraps an io.Writer and serializes each call to its Write method. Using a SyncWriter has the benefit that the smallest practical portion of the logging logic is performed within a mutex, but it requires the formatting Logger to make only one call to Write per log event. NewSyncLogger wraps any Logger and serializes each call to its Log method. Using a SyncLogger has the benefit that it guarantees each log event is handled atomically within the wrapped logger, but it typically serializes both the formatting and output logic. Use a SyncLogger if the formatting logger may perform multiple writes per log event. This package relies on the practice of wrapping or decorating loggers with other loggers to provide composable pieces of functionality. It also means that Logger.Log must return an error because some implementations—especially those that output log data to an io.Writer—may encounter errors that cannot be handled locally. This in turn means that Loggers that wrap other loggers should return errors from the wrapped logger up the stack. Fortunately, the decorator pattern also provides a way to avoid the necessity to check for errors every time an application calls Logger.Log. 
An application required to panic whenever its Logger encounters an error could initialize its logger as follows.
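The Logger interface, contextual With, and the panic-on-error wrapper mentioned above can be sketched in a few lines. This is a stdlib-only sketch of the go-kit log design; LoggerFunc, With, Format, and NewPanicLogger are simplified stand-ins, not the package's exact API.

```go
package main

import (
	"fmt"
	"strings"
)

// Logger is the single-method interface: Log accepts alternating
// key/value pairs (keyvals).
type Logger interface {
	Log(keyvals ...interface{}) error
}

// LoggerFunc adapts a plain function to the Logger interface.
type LoggerFunc func(...interface{}) error

func (f LoggerFunc) Log(keyvals ...interface{}) error { return f(keyvals...) }

// With returns a contextual logger that prepends keyvals to every event.
func With(l Logger, keyvals ...interface{}) Logger {
	return LoggerFunc(func(kv ...interface{}) error {
		return l.Log(append(append([]interface{}{}, keyvals...), kv...)...)
	})
}

// NewPanicLogger decorates a logger so any error from Log becomes a
// panic, freeing callers from checking the error on every call.
func NewPanicLogger(l Logger) Logger {
	return LoggerFunc(func(kv ...interface{}) error {
		if err := l.Log(kv...); err != nil {
			panic(err)
		}
		return nil
	})
}

// Format renders keyvals as "k=v" pairs, standing in for a real encoder.
func Format(keyvals ...interface{}) string {
	var parts []string
	for i := 0; i+1 < len(keyvals); i += 2 {
		parts = append(parts, fmt.Sprintf("%v=%v", keyvals[i], keyvals[i+1]))
	}
	return strings.Join(parts, " ")
}

func main() {
	base := LoggerFunc(func(kv ...interface{}) error {
		fmt.Println(Format(kv...))
		return nil
	})
	logger := NewPanicLogger(With(base, "taskID", 1))
	logger.Log("event", "starting task") // taskID=1 event=starting task
}
```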
Package goexec is a fluent decorator based API for os/exec
NoListFileSystem is a custom filesystem implementation. It follows the decorator pattern and wraps around a "base" file system. It is mainly used with http.FileServer so that 404 status code is returned instead of a directory listing.
Package style provides functions that decorate text (like formatting, changing case or colors) using several idioms (plain text, (non-)colored term, mandoc, markdown). Proposed styles are supposed to be easily extendable.
Package checkpoint provides a way to decorate errors by some additional caller information which results in something similar to a stacktrace. Each error added to a Checkpoint can be checked by errors.Is and retrieved by errors.As.
Package dst declares the types used to represent decorated syntax trees for Go packages.
Package xgbutil is a utility library designed to make common tasks with the X server easier. The central design choice that has driven development is to hide the complexity of X wherever possible but expose it when necessary. For example, the xevent package provides an implementation of an X event loop that acts as a dispatcher to event handlers set up with the xevent, keybind and mousebind packages. At the same time, the event queue is exposed and can be modified using xevent.Peek and xevent.DequeueAt. The xgbutil package is considerably small, and only contains some type definitions and the initial setup for an X connection. Much of the functionality of xgbutil comes from its sub-packages. Each sub-package is appropriately documented. xgbutil is go-gettable: XGB is the main dependency, and is required for all packages inside xgbutil. graphics-go and freetype-go are also required if using the xgraphics package. A quick example to demonstrate that xgbutil is working correctly: The output will be a list of names of all top-level windows and their geometry including window manager decorations. (Assuming your window manager supports some basic EWMH properties.) The examples directory contains a sizable number of examples demonstrating common tasks with X. They are intended to demonstrate a single thing each, although a few that require setup are necessarily long. Each example is heavily documented. The examples directory should be your first stop when learning how to use xgbutil. xgbutil is also used heavily throughout my (BurntSushi) window manager, Wingo. It may be useful reference material. Wingo project page: https://github.com/BurntSushi/wingo While I am (BurntSushi) fairly confident that XGB is thread safe, I am only somewhat confident that xgbutil is thread safe. It simply has not been tested enough for my confidence to be higher. Note that the xevent package's X event loop is not concurrent. 
Namely, designing a generally concurrent X event loop is extremely complex. Instead, the onus is on you, the user, to design concurrent callback functions if concurrency is desired.
Package gologger defines an interface and logger based on the logrus (https://github.com/sirupsen/logrus) library, implementing out-of-the-box features such as: - singleton logic - debugging option (`WithDebugLevel(true)`) - decorated output (`WithRunTimeContext()`) - no-logger option (`WithNullLogger()`) - option to write logging output to a file - convenience methods to create new logger instances (i.e. `(l) func NewLoggerWithField()`) - option to set a custom log level. Package log implements a custom logger based on logrus.
Package service is a gRPC service. It implements all of the service methods defined in the proto file. Authentication will happen at the per-method level; I am still deciding on a pattern, but using interceptors and decorating the methods I should be able to build a decent permissions framework.
Package otelmiddleware provides middleware for wrapping http.Server handlers with Open Telemetry tracing support. The trace.Span is decorated with standard metadata extracted from the http.Request injected into the middleware. The basic information is extracted using the OpenTelemetry semconv package. When a span gets initialized, it uses the following slice of trace.SpanStartOption. The slice can be extended using the WithAttributes TraceOption function. After these options are applied, a new span is created and the middleware will pass the http.ResponseWriter and http.Request to the next http.Handler.
Package otelslog provides a function to extend structured logs using slog with the Open Telemetry trace related context. Currently, slog is offered through golang.org/x/exp/slog. slog.Logger is decorated with standard metadata extracted from the trace.SpanContext; a traceID, spanID and additional information is injected into a log. The initialization uses file level variable configuration to set defaults for the functions to use. SetLogOptions can overwrite the defaults. When the configuration is done, AddTracingContext and AddTracingContextWithAttributes decorate slog logs with data from the trace context. To add trace context data to logging, the context can be passed by using slog.LogAttrs(nil, slog.LevelInfo, "this is a log", otelslog.AddTracingContext(span)...) for example. The use of slog.LogAttrs is advised because AddTracingContext and AddTracingContextWithAttributes return []slog.Attr, which slog.LogAttrs accepts as a type; other functions accept ...any, which in my tests resulted in !BADKEY entries. Next to using native slog, this package also offers a Logger which extends slog.Logger with its own functions to simplify working with slog.Logger values. The Logger can be used as follows: import "github.com/vincentfree/opentelemetry/otelslog"
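The []slog.Attr idea works with the standard library's log/slog (Go 1.21+) alone. In this sketch, tracingAttrs stands in for AddTracingContext, taking plain strings instead of a real trace.SpanContext:

```go
package main

import (
	"context"
	"log/slog"
	"os"
)

// tracingAttrs sketches what AddTracingContext does: turn trace
// identifiers into a []slog.Attr that slog.LogAttrs accepts directly,
// avoiding the !BADKEY entries produced by the ...any APIs. The real
// package reads these values from a trace.SpanContext.
func tracingAttrs(traceID, spanID string) []slog.Attr {
	return []slog.Attr{
		slog.String("traceID", traceID),
		slog.String("spanID", spanID),
	}
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	logger.LogAttrs(context.Background(), slog.LevelInfo, "this is a log",
		tracingAttrs("4bf92f35", "00f067aa")...)
}
```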
Package otellogrus provides a function to extend structured logs using logrus with the Open Telemetry trace related context. The github.com/sirupsen/logrus logs are decorated with standard metadata extracted from the trace.SpanContext; a traceID, spanID and additional information is injected into a log. The initialization uses file level configuration to set defaults for the function to use. SetLogOptions can overwrite the defaults. When the configuration is done, AddTracingContext and AddTracingContextWithAttributes decorate logrus logs with data from the trace context. Adding trace context data to logs can be achieved by using logrus.WithFields(AddTracingContext(span)).Info("test") for example. import "github.com/vincentfree/opentelemetry/otellogrus"
Package otelzerolog provides a function to extend structured logs using zerolog with the Open Telemetry trace related context. The github.com/rs/zerolog zerolog.Event is decorated with standard metadata extracted from the trace.SpanContext; a traceID, spanID and additional information is injected into a log. The initialization uses file level configuration to set defaults for the function to use. SetLogOptions can overwrite the defaults. When the configuration is done, AddTracingContext and AddTracingContextWithAttributes decorate zerolog logs with data from the trace context. A zerolog.Event can be passed by using log.Info().Func(AddTracingContext(span)).Msg("") for example. import "github.com/vincentfree/opentelemetry/otelzerolog"
Lager makes logs that are easy for computers to parse, easy for people to read, and easy for programmers to generate. It also encourages logging data over messages, which tends to make logs more useful as well as easier to generate. You don't need to pass around a logging object so you can log information from any code. You can decorate a Go context.Context with additional data to be added to each log line written when that context applies. The logs are written in JSON format but the items in JSON are written in a controlled order, preserving the order used in the program code. This makes the logs pretty easy for humans to scan even with no processing or tooling. Typical logging code like: could output (especially when running interactively): (but as a single line). If you declare that the code is running inside Google Cloud Platform (GCP), it could instead output: (as a single line) which GCP understands well but note that it is still easy for a human to read with the consistent order used. You don't even need to take the time to compose and type labels for data items, if it doesn't seem worth it in some cases: There are 11 log levels and 9 can be independently enabled or disabled. You usually use them via code similar to: Panic and Exit cannot be disabled. Fail, Warn, Note, and Acc are enabled by default. If you want to decorate each log line with additional key/value pairs, then you can accumulate those in a context.Context value that gets passed around and then pass that Context in when logging: Most log archiving systems expect JSON log lines to be a map (object/hash) not a list (array). To get that you just declare what labels to use for: timestamp, level, message, list data, context, and module. Support for GCP Cloud Logging and Cloud Trace is integrated.
Package hooks provides several useful Logrus hooks. These hooks are used as decorators of other hooks and provide enhanced functionality:
Package goresilience is a framework/library of utilities to improve the resilience of programs easily. The library is based on the `goresilience.Runner` interface; these runners can be chained using the decorator pattern (like the std library `http.Handler` interface). This makes the library extensible, flexible and clean to use. The runners can be chained as if they were middlewares that can act on the whole execution process of the `goresilience.Func`. Will use a single runner, the retry with the default settings; this will make the `gorunner.Func` be executed and retried N times if it fails. Will use more than one `goresilience.Runner` and chain them to create a very resilient execution of the `goresilience.Func`. In this case we will create a runner that retries and also times out, and we will configure the timeout. Will measure all the execution through the runners using Prometheus metrics. Is an example to show that when the result is not needed we don't need to use an inline function. Is an example to show that we could also use objects to pass parameters and get our results.
Package turtlefinder provides a Containerizer that auto-detects container engines and automatically creates workload watchers for them. It supports both “permanent” daemons as well as socket-activated “don't-call-them-daemons”. Additionally, it also detects the hierarchy of container engines, such as containerd-in-Docker and podman-in-Docker. The following container engines are supported: The following socket activators are supported: That's all that is necessary: Boringly simple, right? The turtlefinder containerizer.Containerizer is safe to be used in concurrent discoveries. A turtlefinder supports two different container engine discovery mechanisms: The turtlefinder then spins up background watchers as required that synchronize with the workload state of the detected container engines. Also, old engine watchers get retired as their engine processes die. This workload state information is then returned as the list of discovered containers, including the hierarchy of container engines, based on which engine is placed inside a container managed by another engine. Basically, upon a container query the turtlefinder containerizer first looks for any newly seen container engines, based on container engine process names. The engine discovery can be extended by plugging in new engine detectors (and adaptors). For “short-lived” container engine services that terminate themselves whenever they go idle, we unfortunately need a more involved discovery mechanism. More involved, as we don't want to slow down discovery by constantly looking for something that isn't even installed in the system, so we need to do some optimization. The general idea is to look for well-known socket activators, namely “systemd”. If found (even multiple times!), we scan such an activator for its listening unix domain sockets and determine their file system paths. If we find a matching path (rather, a matching suffix, such as “podman.sock”) we spin up a suitable background watcher.
Of course, this background watcher will keep the container engine alive, but then we also need to keep this service under constant monitoring. The difficult part here is to avoid repeated unnecessary costly socket activator discoveries. We thus keep some state information about a socket activator's socket-related setup and only rediscover upon noticing changes in its socket configuration (which rarely if ever occurs). A defining feature of the turtlefinder is that it additionally determines the hierarchy of container engines, such as when a container engine is hosted inside a container managed by a (parent) container engine. This hierarchy later gets propagated to the individual containers in the form of a so-called “prefix”, attached as a special container label. Such engine-in-engine configurations are actually not so unknown: Finally, the decoration of the discovered containers uses the usual (extensible) lxkns github.com/thediveo/lxkns/decorator.Decorator mechanism as part of the overall discovery.
Resiliência is a fault tolerance Go library, whose goal is to gather algorithms that implement resiliency patterns. This library provides some fault tolerance policies, which can be used singly to wrap a function or chain other policies together. For individual use of each policy, access the package referring to it to see its documentation. Below we will see how to use a decorator or a policy chain of responsibility. A decorator allows you to decorate a command with one or more policies. These are chained so that the call to the service can be made within a circuit breaker, fallback, retry or timeout. In the example below, the command will be called within the policies in the following order: timeout, retry, circuit breaker and fallback. A policy chain, unlike the decorator, allows you to determine the order in which policies will be chained together. In this case: retry, circuit breaker, timeout and fallback.
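The ordering point — the first policy listed is the outermost wrapper — can be demonstrated with a small sketch. The types Command, Policy, and Chain are illustrative stand-ins, not the library's API:

```go
package main

import "fmt"

// Command is the wrapped service call; a Policy decorates a Command.
type Command func() error

type Policy func(Command) Command

// Chain composes policies so that the first listed runs outermost,
// mirroring the timeout -> retry -> circuit breaker -> fallback ordering
// described above.
func Chain(policies ...Policy) Policy {
	return func(cmd Command) Command {
		for i := len(policies) - 1; i >= 0; i-- {
			cmd = policies[i](cmd)
		}
		return cmd
	}
}

// named records the order in which policies are entered.
func named(name string, trace *[]string) Policy {
	return func(next Command) Command {
		return func() error {
			*trace = append(*trace, name)
			return next()
		}
	}
}

func main() {
	var trace []string
	cmd := Chain(
		named("retry", &trace),
		named("circuitbreaker", &trace),
		named("timeout", &trace),
		named("fallback", &trace),
	)(func() error { return nil })
	_ = cmd()
	fmt.Println(trace) // [retry circuitbreaker timeout fallback]
}
```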
Package htadaptor provides generic domain logic adaptors for HTTP handlers. Adaptors come in three flavors: Validation errors are decorated with the correct http.StatusUnprocessableEntity status code.