Package funcache, inspired by https://docs.python.org/3/library/functools.html, provides LRU, LFU, and ARC caches together with a decorator pattern.
Package autorest implements an HTTP request pipeline suitable for use across multiple go-routines and provides the shared routines relied on by AutoRest (see https://github.com/Azure/autorest/) generated Go code. The package breaks sending and responding to HTTP requests into three phases: Preparing, Sending, and Responding. A typical pattern is: Each phase relies on decorators to modify and / or manage processing. Decorators may first modify and then pass the data along, pass the data first and then modify the result, or wrap themselves around passing the data (such as a logger might do). Decorators run in the order provided. For example, the following: will set the URL to: Preparers and Responders may be shared and re-used (assuming the underlying decorators support sharing and re-use). Performant use is obtained by creating one or more Preparers and Responders shared among multiple go-routines, and a single Sender shared among multiple sending go-routines, all bound together by means of input / output channels. Decorators hold their passed state within a closure (such as the path components in the example above). Be careful to share Preparers and Responders only in a context where such held state applies. For example, it may not make sense to share a Preparer that applies a query string from a fixed set of values. Similarly, sharing a Responder that reads the response body into a passed struct (e.g., ByUnmarshallingJson) is likely incorrect. Lastly, the Swagger specification (https://swagger.io) that drives AutoRest (https://github.com/Azure/autorest/) precisely defines two date forms: date and date-time. The github.com/Azure/go-autorest/autorest/date package provides time.Time derivations to ensure correct parsing and formatting. Errors raised by autorest objects and methods will conform to the autorest.Error interface. See the included examples for more detail. For details on the suggested use of this package by generated clients, see the Client described below.
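For illustration, a minimal sketch of composing Preparer decorators (the base URL and path segments here are placeholders, not values from this package's own examples):

    // Sketch: build a request by composing Preparer decorators.
    package main

    import (
        "fmt"
        "net/http"

        "github.com/Azure/go-autorest/autorest"
    )

    func main() {
        req, err := autorest.Prepare(&http.Request{},
            autorest.AsGet(),
            autorest.WithBaseURL("https://example.com/"),
            autorest.WithPath("a"),
            autorest.WithPath("b"))
        if err != nil {
            panic(err)
        }
        // The decorators ran in the order provided, so the URL is https://example.com/a/b.
        fmt.Println(req.URL)
    }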
The package decorate is used to highlight code by passing it through an input filter and an output filter. Both filters can be configured, for example: The intermediate format is not visible to the user and is only used as a protocol between the input and output filters.
Package dst declares the types used to represent decorated syntax trees for Go packages.
Package hooks provides several useful Logrus hooks. These hooks are used as decorators of other hooks and provide enhanced functionality:
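As an illustration of the decorator idea (a sketch against the standard logrus Hook interface; the asynchronous wrapping shown here is an assumption, not necessarily one of this package's hooks):

    package hooks

    import "github.com/sirupsen/logrus"

    // asyncHook decorates another logrus.Hook, forwarding entries to it on a
    // separate goroutine so the caller never blocks on the wrapped hook.
    type asyncHook struct {
        inner logrus.Hook
    }

    // NewAsyncHook wraps an existing hook with the asynchronous behavior.
    func NewAsyncHook(inner logrus.Hook) logrus.Hook {
        return &asyncHook{inner: inner}
    }

    // Levels delegates to the wrapped hook.
    func (h *asyncHook) Levels() []logrus.Level {
        return h.inner.Levels()
    }

    // Fire hands the entry to the wrapped hook in the background; any error
    // from the wrapped hook is dropped in this sketch.
    func (h *asyncHook) Fire(entry *logrus.Entry) error {
        go h.inner.Fire(entry)
        return nil
    }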
go-ipld-prime is a series of Go interfaces for manipulating IPLD data. See https://github.com/ipld/specs for more information about the basics of "What is IPLD?". See https://github.com/ipld/go-ipld-prime/tree/master/doc/README.md for more documentation about go-ipld-prime's architecture and usage. Here in the godoc, the first couple of types to look at should be: These types provide a generic description of the data model. If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are: All of these types are interfaces. There are several implementations you can choose; we've provided some in subpackages, or you can bring your own. Particularly interesting subpackages include: Note that since interfaces in this package are the core of the library, choices made here maximize correctness and performance -- these choices are *not* always the choices that would maximize ergonomics. (Ergonomics can come on top; performance generally can't.) You can check out the 'must' or 'fluent' packages for more ergonomics; 'traversal' provides some ergonomics features for certain uses; any use of schemas with codegen tooling will provide more ergonomic options; or you can make your own function decorators that do what *you* need.
Package uitable provides a decorator for formatting data as a table
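For illustration, a minimal sketch (assuming the package lives at github.com/gosuri/uitable; the rows are made up):

    package main

    import (
        "fmt"

        "github.com/gosuri/uitable"
    )

    func main() {
        table := uitable.New()
        table.MaxColWidth = 50 // wrap long cells at 50 characters

        table.AddRow("NAME", "LANGUAGE")
        table.AddRow("uitable", "Go")
        table.AddRow("functools", "Python")

        fmt.Println(table) // the rows print as an aligned table
    }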
Package goresilience is a framework/library of utilities to improve the resilience of programs easily. The library is based on the `goresilience.Runner` interface; these runners can be chained using the decorator pattern (like the std library `http.Handler` interface). This makes the library extensible, flexible and clean to use. The runners can be chained as if they were middlewares that act on the whole execution process of the `goresilience.Func`. Will use a single runner, the retry runner with the default settings; this will make the `goresilience.Func` be executed and retried N times if it fails. Will use more than one `goresilience.Runner` and chain them to create a very resilient execution of the `goresilience.Func`. In this case we will create a runner that retries and also times out, and we will configure the timeout. Will measure all the execution through the runners using prometheus metrics. Is an example to show that when the result is not needed we don't need to use an inline function. Is an example to show that we could also use objects to pass parameters and get our results.
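A rough sketch of the single-runner case (the import path github.com/slok/goresilience/retry and the empty retry.Config are assumptions, not quoted from these docs):

    package main

    import (
        "context"
        "fmt"

        "github.com/slok/goresilience/retry"
    )

    func main() {
        // A single runner: retry with the default settings.
        runner := retry.New(retry.Config{})

        // The wrapped goresilience.Func is executed and retried if it fails.
        err := runner.Run(context.TODO(), func(ctx context.Context) error {
            fmt.Println("doing the risky work")
            return nil
        })
        if err != nil {
            fmt.Println("execution failed after the retries:", err)
        }
    }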
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library. 
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like: We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
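Pulling those steps together, the instrumented handlers would look roughly like this (a sketch; the monkit/v3 import path and the handler signatures are assumptions based on the description above):

    package main

    import (
        "context"
        "net/http"

        "github.com/spacemonkeygo/monkit/v3"
    )

    // The package-level monitoring Scope, declared once next to the imports.
    var mon = monkit.Package()

    // HandleHTTP names its error return value so the deferred task can observe it.
    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        defer mon.Task()(&ctx)(&err)
        Proxy(ctx, w, r)
        return nil
    }

    // Proxy has no error to report, so it passes nil instead of &err.
    func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) {
        defer mon.Task()(&ctx)(nil)
        // ... proxy the request to the upstream target ...
    }

    func main() {
        http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            _ = HandleHTTP(r.Context(), w, r)
        }))
    }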
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? How it works is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it somewhere else if you prefer, so instead of you could instead use which is more performant every time after the first time, as sketched below. runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
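The stored-Task form referred to above, reconstructed as a sketch (it reuses the mon scope from the earlier sketch; the function body is a placeholder):

    // Store the Task once; runtime.Caller is then only consulted the first time.
    var myFuncMon = mon.Task()

    // Use the stored Task in this one function only, as the warning above says.
    func MyFunc(ctx context.Context) (err error) {
        defer myFuncMon(&ctx)(&err)
        // ... the function's actual work ...
        return nil
    }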
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
Package cache is a caching library which allows functionality to be decorated onto basic caches
The render package helps manage HTTP request and response payloads. Every well-designed, robust and maintainable Web Service / REST API also needs well-defined request and response payloads. Together with the endpoint handler, the request and response payloads make up the contract between your server and the clients calling on it. Typically in a REST API application, you will have data models (objects/structs) that hold lower-level runtime application state, and at times you need to assemble, decorate, hide or transform the representation before responding to a client. That server output (response payload) structure is likely the input structure to another handler on the server. This is where render comes in - offering a few simple helpers to provide a simple pattern for managing payload encoding and decoding.
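As a rough illustration of that pattern (a sketch assuming the go-chi render helpers, e.g. render.JSON; the payload type is made up):

    package main

    import (
        "net/http"

        "github.com/go-chi/render"
    )

    // articleResponse is a made-up response payload that decorates an internal
    // data model before it goes out to the client.
    type articleResponse struct {
        ID    int    `json:"id"`
        Title string `json:"title"`
    }

    func getArticle(w http.ResponseWriter, r *http.Request) {
        resp := &articleResponse{ID: 1, Title: "Hello"}
        render.JSON(w, r, resp) // encode the response payload as JSON
    }

    func main() {
        http.HandleFunc("/article", getArticle)
        http.ListenAndServe(":8080", nil)
    }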
rowm is a hybrid tiling manager aiming to replicate the ergonomics of terminator and tmux. For more information on the usage, please read the README instead.
main.go - entrypoint, initializes the connection to X and connects root callbacks
root/ - callbacks for the root window are defined here
frame/ - the majority of the logic is in this folder; defines the tree window structure and wrapping decorations
sideloop/ - utilities for running code in line with the X event loop from xgbutil
resources/ - non-compiled data resources
cmd/rowmbright/ - a utility for changing backlight brightness without root privileges
ext/ - misc bits and bobs missing in either the standard library or xgbutil
dev.sh - starts up a container for developing in (required for running compile.sh or test.sh)
compile.sh - go gets/builds rowm in the container
clean.sh - gets rid of the container created by dev.sh
exec.sh - execs into the container created by dev.sh
install.sh - (requires root) builds rowm and installs files to the system (only tested on ubuntu18.04)
test.sh - (requires xephyr to be installed and dev.sh to have been run) opens a nested X server with rowm loaded for trying out
Package degorator implements the decorator pattern in Go. It can be used to add behavior, such as logging or metrics, to a function without affecting the original behavior at runtime.
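The package's own API is not reproduced here; as a generic illustration of the pattern it implements, a hand-rolled function decorator in Go looks like this:

    package main

    import (
        "fmt"
        "time"
    )

    // withTiming decorates a function with a log line and a duration measurement
    // without touching the original function's body.
    func withTiming(name string, fn func(int) int) func(int) int {
        return func(n int) int {
            start := time.Now()
            out := fn(n)
            fmt.Printf("%s(%d) took %s\n", name, n, time.Since(start))
            return out
        }
    }

    func double(n int) int { return n * 2 }

    func main() {
        decorated := withTiming("double", double)
        fmt.Println(decorated(21)) // logs the timing, then prints 42
    }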
Package errd simplifies error and defer handling. Package errd allows returning from a block of code without using the usual if clauses, while properly intercepting errors and passing them to code called at defer time. The following piece of idiomatic Go writes the contents of a reader to a file on Google Cloud Storage: Google Cloud Storage allows files to be written atomically. This code minimizes the chance of writing a bad file by aborting the write, using CloseWithError, whenever any anomaly is encountered. This includes a panic that could occur in the reader. Package errd aims to reduce bugs resulting from such subtleties by making very strict error checking the easy default. The following code achieves the same as the above: Discard is an example of an error handler. Here it signals that we want to ignore the error of the first Close. In all of the code above we made the common faux pas of passing errors on without decorating them. Package errd defines a Handler type to simplify the task of decorating. Suppose we want to use github.com/pkg/errors to decorate errors. A simple handler can be defined as: This handler can then be used as follows: The storage package used in this example defines the errors that are typically a result of user error. It would be possible to write a more generic storage writer that will add additional clarification when possible. Using such a handler as a default handler would look like: Setting up a global config with a default handler and using that everywhere makes it easy to enforce decorating errors. Error handlers can also be used to pass up HTTP error codes, log errors, attach metrics, etc. A function that is passed to Run does not have any return values, not even an error. Users are supposed to set return values in the outer scope, for example by using named return variables. If name is only set at the end, any error will force an early return from Run and leave name empty. Otherwise, name will be set and err will be nil.
Package decor contains a simple-to-use yet powerful template loader mainly focused on the Go built-in template packages text/template and html/template. While these packages provide powerful templating facilities with great support for rendering textual content, using these packages to build a complete layout-based template solution involves a lot of boilerplate code. decor tries to fill this gap by providing just what's needed to reduce the boilerplate and enable a great developer experience.
Package xgbutil is a utility library designed to make common tasks with the X server easier. The central design choice that has driven development is to hide the complexity of X wherever possible but expose it when necessary. For example, the xevent package provides an implementation of an X event loop that acts as a dispatcher to event handlers set up with the xevent, keybind and mousebind packages. At the same time, the event queue is exposed and can be modified using xevent.Peek and xevent.DequeueAt. The xgbutil package is considerably small, and only contains some type definitions and the initial setup for an X connection. Much of the functionality of xgbutil comes from its sub-packages. Each sub-package is appropriately documented. xgbutil is go-gettable: XGB is the main dependency, and is required for all packages inside xgbutil. graphics-go and freetype-go are also required if using the xgraphics package. A quick example to demonstrate that xgbutil is working correctly: The output will be a list of names of all top-level windows and their geometry including window manager decorations. (Assuming your window manager supports some basic EWMH properties.) The examples directory contains a sizable number of examples demonstrating common tasks with X. They are intended to demonstrate a single thing each, although a few that require setup are necessarily long. Each example is heavily documented. The examples directory should be your first stop when learning how to use xgbutil. xgbutil is also used heavily throughout my window manager, Wingo. It may be useful reference material. Wingo project page: https://github.com/eankeen/wingo While I am fairly confident that XGB is thread safe, I am only somewhat confident that xgbutil is thread safe. It simply has not been tested enough for my confidence to be higher. Note that the xevent package's X event loop is not concurrent. Namely, designing a generally concurrent X event loop is extremely complex. Instead, the onus is on you, the user, to design concurrent callback functions if concurrency is desired.
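The quick example mentioned above has roughly this shape (a sketch; the import paths and the ewmh/xwindow helpers used are assumptions about the sub-package APIs):

    package main

    import (
        "log"

        "github.com/BurntSushi/xgbutil"
        "github.com/BurntSushi/xgbutil/ewmh"
        "github.com/BurntSushi/xgbutil/xwindow"
    )

    func main() {
        X, err := xgbutil.NewConn()
        if err != nil {
            log.Fatal(err)
        }

        // Ask the window manager for every top-level client window.
        clients, err := ewmh.ClientListGet(X)
        if err != nil {
            log.Fatal(err)
        }

        for _, win := range clients {
            name, _ := ewmh.WmNameGet(X, win)
            // Geometry including window manager decorations.
            geom, err := xwindow.New(X, win).DecorGeometry()
            if err != nil {
                continue
            }
            log.Printf("%s: %v", name, geom)
        }
    }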
Package wayland is the root of the wayland sample repository. Simple shm demo, draws into shared memory, does not rely on the window package. Smoke demo. Reacts to mouse input, uses the window package. ImageViewer demo. Displays an image file. Does not use the window package, draws its own decorations. Draws fonts in the titlebar, for this it needs the Deja Vu font (fonts-dejavu). Editor demo. Currently does not work. Provides basic OS functions like the creation of an anonymous temporary file (CreateAnonymousFile), Mmap, Munmap, and Socket communication. Utility functions found in wayland-client, provided for convenience. Wrapper around the C library libxkbcommon. Used inside the window package. Needs libxkbcommon-dev for compilation and recommends libx11-data for runtime operation. Implements a window model on top of wayland. Aims to be a lot like the original window.c code. Uses wl. Loads X cursors; requires a cursor theme to be installed, for instance dmz-cursor-theme. Like cairo but not cairo. Does not depend on anything. External contains error-checking code and the swizzle function for multiple architectures. No dependencies. Unstable wayland protocols. Depends on wl. The wayland library itself does not require any external deps (except for a wayland server during runtime). Stable xdg protocol. Depends on wl.
Package ataman is a colored terminal text rendering engine using ANSI sequences. The project aims at simple text attribute manipulation with templates. Here are a couple of examples to introduce the project. Customization of decoration styles can be done through `decorate.Style`, e.g. Templates follow simple rules. Decoration styles use the following dictionary. Some template examples with the curly decoration style.
Package irc provides an IRC client implementation. This overview provides brief introductions for types and concepts. The godoc for each type contains expanded documentation. Jump to the package examples to see what writing client code looks like with this package. The README file links to available extension packages for features such as state tracking, flood protection, and more. These are the main interfaces and structs that you will interact with while using this package: The Client type provides a simple abstraction around an IRC connection. It manages reading and writing messages to the IRC connection and calls your handler for each message it parses. It also deals with protocol concerns like ping timeouts. This interface enables the development of handler packages. Such packages may implement protocols such as IRCv3 Message IDs, or common bot concerns like preferred/alternate nickname for disconnects, flood protection, and channel state. Because the Handler interface for the irc package mimics the signature of the http.Handler interface, most patterns for http middleware can also be applied to irc handlers. Search results for phrases like "golang http middleware pattern", "adapter patterns", and "decorator patterns" should all return concepts from tutorials and blog posts that can be applied to this interface. The MessageWriter interface accepts any type that knows how to marshal itself into a line of IRC-encoded text. Most of the time it makes sense to send a Message struct, either by using the NewMessage function or any of the related constructors such as irc.Msg, irc.Notice, irc.Describe, etc. However, it can also be very simple to implement yourself: The named Message constructors (irc.Msg, irc.Notice, etc.) should generally be preferred because they explicitly list the available parameters for each command. This provides type safety, ordering safety, and most IDEs will provide intellisense suggestions and documentation for each parameter. In other words: The Router type is an implementation of Handler. It provides a convenient way to route incoming messages to specific handler functions by matching against message attributes like the command, source, target channel, and much more. It also provides an easy way to apply middleware, either globally or to specific routes. You are not required to use it, however. You can just as easily write your own message handler. It performs a role comparable to http.ServeMux, though it is not really a multiplexer. Middleware are just handlers. The term "middleware" applies to handlers which follow a pattern of accepting a handler as one of their arguments and returning a handler. Middleware can intercept outgoing messages by decorating the MessageWriter, as well as call the next handler with a modified *Message. These two abilities allow well-written packages to provide middleware that extend a client with nearly any IRC capability. Because the ordering of received messages is important for calculating various client states, it is generally not safe for middleware handlers to operate concurrently unless they can maintain message ordering. To bring it all together, this is the general sequence of events when running a client: Each Message parsed from the stream will result in a call to the client's handler, which is given a MessageWriter and reference to the parsed Message struct. 
Assuming that you use the package's Router type as your handler, this is what that sequence looks like: Any of these actions could occur at any point in the chain: This package does not implement message formatting. That is to say, there are no irc.Msgf or related functions. Formatting requirements vary widely by application. Some applications will want to extend the formatting rules with their own replacement sequences to include IRC color formatting in replies. Rather than implement nonstandard rules here (and force users to look up replacements), the canonical way to write formatted replies in the style of fmt.Printf is to write your own reply helper functions. For example: Hello, #World: The following code connects to an IRC server, waits for RPL_WELCOME, then requests to join a channel called #world, waits for the server to tell us that we've joined, then sends the message "Hello!" to #world, then disconnects with the message "Goodbye.". This example uses the message router to perform more complicated message matching with an event callback style. Connects to an IRC server, joins a channel called "#world", sends the message "Hello!", then quits when CTRL+C is pressed. The simplest possible implementation of a Message handler. In this case, "simple" means it is not using package features. The code should be considered to be a "messy" implementation, but demonstrates how easy it is to get down to the protocol level, if desired.
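One possible shape for such a reply helper (a sketch only: MessageWriter and irc.Msg are the names described above, but the WriteMessage method and the (target, text) constructor signature are assumptions here, so adjust them to the package's real API):

    // msgf is an application-level reply helper in the fmt.Printf style.
    // It assumes irc.Msg(target, text) builds a PRIVMSG and that the
    // MessageWriter exposes a WriteMessage method.
    func msgf(w irc.MessageWriter, target, format string, args ...interface{}) {
        text := fmt.Sprintf(format, args...)
        w.WriteMessage(irc.Msg(target, text))
    }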
One Tool to rule them all, One Tool to CI them, One Tool to test them all and in the darkness +1 them. Gandalf is designed to provide a language- and stack-agnostic HTTP API contract testing suite and prototyping toolchain. This is achieved by: running an HTTP API (aka provider), connecting to it as a real client (aka consumer) of the provider, and asserting that it matches various rules (aka contracts). Optionally, once a contract is written you can then generate an approximation of the API (this happens just before the contract is tested) in the form of a mock. This allows for rapid prototyping and/or parallel development of the real consumer and provider implementations. Gandalf has no allegiance to any specific paradigms, technologies, or concepts and should bend to fit real-world use cases as opposed to vice versa. This means that if Gandalf does something one way today, it could still support a different way tomorrow, provided someone has a use for it. While Gandalf does use Go and the go test framework, it is not specific to Go, as at its core it just makes HTTP requests and checks the responses. Your web server or clients can be written in any language/framework. The official documentation also uses JSON and RESTful APIs as examples, but Gandalf supports any and all paradigms or styles of API. Most Go programs are compiled down to a binary and executed; Gandalf is designed to be used as a library to write your own tests and decorate the test binary instead. For example, Gandalf does have several command-line switches, however they are provided to the `go test` command instead of some nonexistent `Gandalf` command. This allows Gandalf to get all kinds of testing and benchmarking support for free while being a well-known, stable base to build upon. Contract testing can be a bit nebulous and also has various optional prefixes such as Consumer Driven. Gandalf cares not for any prefixes (who writes contracts and where is up to you), nor does it care if you are testing the interface of your API, the business logic, or some combination of both; no one will save you from blowing your own foot off if you choose to.
Package dctx provides several utilities for working with golang.org/x/net/context in http requests. Primarily, the focus is on logging relevant request information but this package is not limited to that purpose. The easiest way to get started is to get the background context: The returned context should be passed around your application and be the root of all other context instances. If the application has a version, this line should be called before anything else: The above will store the version in the context and it will be available to the logger. The most useful aspect of this package is GetLogger. This function takes any context.Context interface and returns the current logger from the context. Canonical usage looks like this: GetLogger also takes optional key arguments. The keys will be looked up in the context and reported with the logger. The following example would return a logger that prints the version with each log message: The above would print out a log message like this: When used with WithLogger, we gain the ability to decorate the context with loggers that have information from disparate parts of the call stack. Following from the version example, we can build a new context with the configured logger such that we always print the version field: Since the logger has been pushed to the context, we can now get the version field for free with our log messages. Future calls to GetLogger on the new context will have the version field: This becomes more powerful when we start stacking loggers. Let's say we have the version logger from above but also want a request id. Using the context above, in our request scoped function, we place another logger in the context: When GetLogger is called on the new context, "http.request.id" will be included as a logger field, along with the original "version" field: Note that this only affects the new context; the previous context, with the version field, can be used independently. Put another way, the new logger, added to the request context, is unique to that context and can have request-scoped variables. This package also contains several methods for working with http requests. The concepts are very similar to those described above. We simply place the request in the context using WithRequest. This makes the request variables available. GetRequestLogger can then be called to get request-specific variables in a log line: Like above, if we want to include the request data in all log messages in the context, we push the logger to a new context and use that one: The concept is fairly powerful and ensures that calls throughout the stack can be traced in log messages. Using fields like "http.request.id", one can analyze call flow for a particular request with a simple grep of the logs.
Package context provides several utilities for working with golang.org/x/net/context in http requests. Primarily, the focus is on logging relevant request information but this package is not limited to that purpose. The easiest way to get started is to get the background context: The returned context should be passed around your application and be the root of all other context instances. If the application has a version, this line should be called before anything else: The above will store the version in the context and it will be available to the logger. The most useful aspect of this package is GetLogger. This function takes any context.Context interface and returns the current logger from the context. Canonical usage looks like this: GetLogger also takes optional key arguments. The keys will be looked up in the context and reported with the logger. The following example would return a logger that prints the version with each log message: The above would print out a log message like this: When used with WithLogger, we gain the ability to decorate the context with loggers that have information from disparate parts of the call stack. Following from the version example, we can build a new context with the configured logger such that we always print the version field: Since the logger has been pushed to the context, we can now get the version field for free with our log messages. Future calls to GetLogger on the new context will have the version field: This becomes more powerful when we start stacking loggers. Let's say we have the version logger from above but also want a request id. Using the context above, in our request scoped function, we place another logger in the context: When GetLogger is called on the new context, "http.request.id" will be included as a logger field, along with the original "version" field: Note that this only affects the new context; the previous context, with the version field, can be used independently. Put another way, the new logger, added to the request context, is unique to that context and can have request-scoped variables. This package also contains several methods for working with http requests. The concepts are very similar to those described above. We simply place the request in the context using WithRequest. This makes the request variables available. GetRequestLogger can then be called to get request-specific variables in a log line: Like above, if we want to include the request data in all log messages in the context, we push the logger to a new context and use that one: The concept is fairly powerful and ensures that calls throughout the stack can be traced in log messages. Using fields like "http.request.id", one can analyze call flow for a particular request with a simple grep of the logs.
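A sketch of the version/logger stacking described above (assuming the docker/distribution-style import path; the version string is illustrative):

    package main

    import (
        "github.com/docker/distribution/context"
    )

    func main() {
        // Root context; store the application version so loggers can report it.
        ctx := context.Background()
        ctx = context.WithVersion(ctx, "v1.2.3")

        // Push a logger that always carries the version field...
        ctx = context.WithLogger(ctx, context.GetLogger(ctx, "version"))

        // ...so later calls to GetLogger include it automatically.
        context.GetLogger(ctx).Infof("starting up")
    }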
"jst" -- JSON Table -- is a format that's parsable as JSON, while sprucing up the display to humans using the non-significant whitespace cleverly. Regular data can be piped into a JSON Table formatter, and some simple heuristics will attempt to detect table structure. Lists can be turned into column-aligned tables: Maps can also be turned into column-aligned tables: Decorations can be applied: The overall nature of detecting traits of the data (particularly, size) means JSON Tables cannot be created streamingly; we have to process the entire structure first, and only then can we begin to output correctly aligned data. (It's for this reason that this is implemented over IPLD Nodes, and would be somewhat painful to implement using just refmt Tokens -- we'd end up buffering *all* the tokens anyway, and wanting to build an index, etc.) There's no unmarshal functions because unmarshal is just... regular JSON unmarshal.
termdeco is a small library for cross-platform console text decoration. It provides functions for text decoration, Formatter, that can be used with fmt.Printf-like functions (Fprintf, Sprintf, etc.) like In this case, v is printed with red, bold text (on Windows, bold is translated into brighter text) on a green background. It also provides wrappers for functions in the fmt package. For cross-platform compatibility, it is recommended to use these wrappers for printing decorated text, like It implements the following ANSI-compatible decorations. Black, Red, Green, Yellow, Blue, Magenta, Cyan, White text and background colors. Bright Black, Bright Red, Bright Green, Bright Yellow, Bright Blue, Bright Magenta, Bright Cyan, Bright White text and background colors. Each decoration is implemented as a function of the same name which returns the Decoration type Formatter, so it can be called in a chain like and the later-called method overwrites the previous one; for example, applies red and after that overwrites it, so the text is printed as green
Package decorate contains various helpers to decorate errors with fewer lines of code in functions.