Package errorfmt provides a helper function to decorate errors with additional context.
Package assert provides functions for writing checks in unit tests. This package reduces the amount of code needed to write simple assertions. It implements the best-practice pattern where the output of a failure explains what the check "got" and what it "wanted". The assert functions require less code to write yet remain easy to understand. The package works by decorating the standard testing.T type in your test and reporting (via Fatal) the offending assertion call if a check fails. See the examples (which use the dot import). You can create and use your own checks by implementing the RelationalOperator.
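The "got/wanted" failure pattern can be sketched as below. The `Equals` helper name and `Fataler` interface are assumptions for illustration; the real package decorates the full testing.T type, while this sketch uses a fake so it can run outside `go test`:

```go
package main

import "fmt"

// Fataler is the subset of *testing.T this sketch needs.
type Fataler interface {
	Fatalf(format string, args ...interface{})
}

// Equals fails the test with a "got ... wanted ..." message, the
// best-practice failure output described above.
func Equals(t Fataler, got, want interface{}) {
	if got != want {
		t.Fatalf("got %v, wanted %v", got, want)
	}
}

// fakeT records the failure message so the example is runnable here.
type fakeT struct{ msg string }

func (f *fakeT) Fatalf(format string, args ...interface{}) {
	f.msg = fmt.Sprintf(format, args...)
}

func main() {
	t := &fakeT{}
	Equals(t, 41, 42)
	fmt.Println(t.msg) // got 41, wanted 42
}
```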
Package ct provides ANSI terminal text coloring and decoration support.
Package gurl is a class of higher-order components that perform HTTP requests with a few interesting properties, such as composition and laziness. The library implements a rough and naive equivalent of Haskell's do-notation, the so-called monadic binding form. This construction decorates HTTP I/O pipeline(s) with "programmable commas". Microservices have become a design style for evolving system architecture in parallel and implementing stable, consistent interfaces. An expressive language is required to design the variety of network communication use cases. Pure functional languages fit very well to express communication behavior: they offer rich techniques for hiding networking complexity using monads as abstractions. IO monads help us compose a chain of network operations, represent them as a pure computation, and build new things from small reusable elements. The library is modeled after Erlang's https://github.com/fogfish/m_http The library attempts to adapt the human-friendly syntax of HTTP request/response logging/definition used by curl to the Behavior as a Code paradigm. It connects cause-and-effect (Given/When/Then) with networking (Input/Process/Output). This semantics provides an intuitive approach to specifying HTTP requests/responses. Adopting this syntax as native Go code provides rich capabilities for network programming.
↣ cause-and-effect abstraction of HTTP request/response, naive do-notation ↣ high-order composition of individual HTTP requests into complex networking computations ↣ human-friendly, Go-native, declarative syntax to depict HTTP operations ↣ a declarative approach for testing RESTful interfaces ↣ automatic encoding/decoding of native Go HTTP payloads using Content-Type hints ↣ generic transformation to algebraic data types ↣ simplified error handling with a naive Either implementation. The standard Go packages implement a low-level HTTP interface, which requires knowledge of the protocol itself and of aspects of the Go implementation, involves a bit of boilerplate coding, and lacks standardized chaining (composition) of individual requests. The gurl library inherits the ability of pure functional languages to express communication behavior by hiding networking complexity behind a category pattern (aka "do"-notation). This pattern helps us compose a chain of network operations, represent them as a pure computation, and build new things from small reusable elements. This library uses the "do"-notation, the so-called monadic binding form, well known in functional programming languages such as Haskell and Scala. Networking becomes a collection of composed "do"-notations in the context of a state monad. A composition of HTTP primitives within the category is written with the following syntax, where each arrow is a morphism applied to the HTTP protocol. The implementation defines an abstraction of the protocol environments and lenses to focus inside it. In other words, the category represents the environment as an "invisible" side effect of the composition. `gurl.Join(arrows ...Arrow) Arrow` and its composition implement lazy I/O: it only returns a "promise" that you have to evaluate in the context of an IO instance. The following code snippet demonstrates a typical usage scenario.
The evaluation of the "program" fails if either the networking fails or expectations do not match the actual response. There is no need to check an error code after each operation; the composition is smart enough to terminate the "program" execution. See the User Guide for the library at https://github.com/fogfish/gurl
One Tool to rule them all, One Tool to CI them, One Tool to test them all and in the darkness +1 them. Gandalf is designed to provide a language- and stack-agnostic HTTP API contract testing suite and prototyping toolchain. This is achieved by running an HTTP API (aka provider), connecting to it as a real client (aka consumer) of the provider, and asserting that it matches various rules (aka contracts). Optionally, once a contract is written, you can then generate an approximation of the API (this happens just before the contract is tested) in the form of a mock. This allows for rapid prototyping and/or parallel development of the real consumer and provider implementations. Gandalf has no allegiance to any specific paradigms, technologies, or concepts and should bend to fit real-world use cases, as opposed to vice versa. This means that if Gandalf does something one way today, it does not mean that tomorrow it could not support a different way, provided someone has a use for it. While Gandalf does use Go and the go test framework, it is not specific to Go, as at its core it just makes HTTP requests and checks the responses. Your web server or clients can be written in any language/framework. The official documentation also uses JSON and RESTful APIs as examples, but Gandalf supports any and all paradigms or styles of API. Most Go programs are compiled down to a binary and executed; Gandalf is designed to be used as a library to write your own tests and decorate the test binary instead. For example, Gandalf does have several command-line switches, however they are provided to the `go test` command instead of some nonexistent `Gandalf` command. This allows Gandalf to get all kinds of testing and benchmarking support for free while building on a well-known, stable base.
Contract testing can be a bit nebulous and also comes with various optional prefixes, such as Consumer Driven. Gandalf cares not for any prefixes (who writes contracts, and where, is up to you), nor does it care whether you are testing the interface of your API, the business logic, or some combination of both; no one will save you from blowing your own foot off if you choose to.
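The provider/consumer/contract flow can be sketched with the standard library alone. Gandalf's real API differs; the `runContract` helper and route here are assumptions purely for illustration:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// runContract starts a provider, acts as a real consumer against it,
// and returns the facts the contract would assert on.
func runContract() (status int, contentType, body string, err error) {
	// provider: the HTTP API under test
	provider := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		io.WriteString(w, `{"status":"ok"}`)
	}))
	defer provider.Close()

	// consumer: a real HTTP client exercising the provider
	resp, err := http.Get(provider.URL + "/health")
	if err != nil {
		return 0, "", "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return resp.StatusCode, resp.Header.Get("Content-Type"), string(b), err
}

func main() {
	// contract: assert the response matches the agreed rules
	status, ct, body, err := runContract()
	fmt.Println(status, ct, body, err)
}
```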
Package uitable provides a decorator for formatting data as a table
Package goresilience is a framework/library of utilities to improve the resilience of programs easily. The library is based on the `goresilience.Runner` interface; these runners can be chained using the decorator pattern (like the standard library's `http.Handler` interface). This makes the library extensible, flexible, and clean to use. The runners can be chained as if they were middlewares acting on the whole execution process of the `goresilience.Func`. One example uses a single runner, a retry with default settings, which makes the `goresilience.Func` execute and be retried N times if it fails. Another uses more than one `goresilience.Runner` chained together to create a very resilient execution of the `goresilience.Func`: in this case we create a runner that retries and also times out, and we configure the timeout. Another measures all execution through the runners using Prometheus metrics. One example shows that when the result is not needed, we don't need to use an inline function; another shows that we can also use objects to pass parameters and get our results.
Package context provides several utilities for working with golang.org/x/net/context in http requests. Primarily, the focus is on logging relevant request information but this package is not limited to that purpose. The easiest way to get started is to get the background context: The returned context should be passed around your application and be the root of all other context instances. If the application has a version, this line should be called before anything else: The above will store the version in the context and will be available to the logger. The most useful aspect of this package is GetLogger. This function takes any context.Context interface and returns the current logger from the context. Canonical usage looks like this: GetLogger also takes optional key arguments. The keys will be looked up in the context and reported with the logger. The following example would return a logger that prints the version with each log message: The above would print out a log message like this: When used with WithLogger, we gain the ability to decorate the context with loggers that have information from disparate parts of the call stack. Following from the version example, we can build a new context with the configured logger such that we always print the version field: Since the logger has been pushed to the context, we can now get the version field for free with our log messages. Future calls to GetLogger on the new context will have the version field: This becomes more powerful when we start stacking loggers. Let's say we have the version logger from above but also want a request id. Using the context above, in our request scoped function, we place another logger in the context: When GetLogger is called on the new context, "http.request.id" will be included as a logger field, along with the original "version" field: Note that this only affects the new context; the previous context, with the version field, can be used independently.
Put another way, the new logger, added to the request context, is unique to that context and can have request-scoped variables. This package also contains several methods for working with http requests. The concepts are very similar to those described above. We simply place the request in the context using WithRequest. This makes the request variables available. GetRequestLogger can then be called to get request specific variables in a log line: Like above, if we want to include the request data in all log messages in the context, we push the logger to a new context and use that one: The concept is fairly powerful and ensures that calls throughout the stack can be traced in log messages. Using the fields like "http.request.id", one can analyze call flow for a particular request with a simple grep of the logs.
Package autorest implements an HTTP request pipeline suitable for use across multiple go-routines and provides the shared routines relied on by AutoRest (see https://github.com/Azure/autorest/) generated Go code. The package breaks sending and responding to HTTP requests into three phases: Preparing, Sending, and Responding. A typical pattern is: Each phase relies on decorators to modify and / or manage processing. Decorators may first modify and then pass the data along, pass the data first and then modify the result, or wrap themselves around passing the data (such as a logger might do). Decorators run in the order provided. For example, the following: will set the URL to: Preparers and Responders may be shared and re-used (assuming the underlying decorators support sharing and re-use). Performant use is obtained by creating one or more Preparers and Responders shared among multiple go-routines, and a single Sender shared among multiple sending go-routines, all bound together by means of input / output channels. Decorators hold their passed state within a closure (such as the path components in the example above). Be careful to share Preparers and Responders only in a context where such held state applies. For example, it may not make sense to share a Preparer that applies a query string from a fixed set of values. Similarly, sharing a Responder that reads the response body into a passed struct (e.g., ByUnmarshallingJson) is likely incorrect. Lastly, the Swagger specification (https://swagger.io) that drives AutoRest (https://github.com/Azure/autorest/) precisely defines two date forms: date and date-time. The github.com/Azure/go-autorest/autorest/date package provides time.Time derivations to ensure correct parsing and formatting. Errors raised by autorest objects and methods will conform to the autorest.Error interface. See the included examples for more detail. 
For details on the suggested use of this package by generated clients, see the Client described below.
The render package helps manage HTTP request and response payloads. Every well-designed, robust, and maintainable web service / REST API also needs well-defined request and response payloads. Together with the endpoint handler, the request and response payloads make up the contract between your server and the clients calling it. Typically in a REST API application, you will have data models (objects/structs) that hold lower-level runtime application state, and at times you need to assemble, decorate, hide, or transform the representation before responding to a client. That server output (response payload) structure is likely the input structure to another handler on the server. This is where render comes in, offering a few simple helpers to provide a simple pattern for managing payload encoding and decoding.
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage: we've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods.
You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library. During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. 
Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there's comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests.
Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like: We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json. You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? 
How it works is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it elsewhere if you prefer, so instead of you could instead use which is more performant every time after the first time. runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a to your HandleHTTP method, rebuild, and restart.
Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There's multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions." Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
Package cache is a caching library that allows functionality to be decorated onto basic caches
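Decorating functionality onto a basic cache can be sketched as below; the `Cache` interface and `countingCache` decorator are illustrative names, not this package's API:

```go
package main

import "fmt"

// Cache is a basic get/set cache that decorators can wrap.
type Cache interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// mapCache is the basic cache being decorated.
type mapCache map[string]string

func (m mapCache) Get(k string) (string, bool) { v, ok := m[k]; return v, ok }
func (m mapCache) Set(k, v string)             { m[k] = v }

// countingCache decorates another Cache with hit/miss counters,
// embedding it so Set passes through untouched.
type countingCache struct {
	Cache
	hits, misses int
}

func (c *countingCache) Get(k string) (string, bool) {
	v, ok := c.Cache.Get(k)
	if ok {
		c.hits++
	} else {
		c.misses++
	}
	return v, ok
}

func main() {
	c := &countingCache{Cache: mapCache{}}
	c.Set("a", "1")
	c.Get("a")
	c.Get("b")
	fmt.Println(c.hits, c.misses) // 1 hit, 1 miss
}
```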
Package hooks provides several useful Logrus hooks. These hooks are used as decorators of other hooks and provide enhanced functionality:
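A minimal sketch of the decorator idea, assuming a pared-down hook interface (real Logrus hooks also implement Levels() and receive a *logrus.Entry): the outer hook wraps an inner one and adds behavior, here swallowing delivery errors so a failing hook cannot disrupt logging.

```go
package main

import "fmt"

// Entry and Hook are simplified, locally defined stand-ins so the
// sketch is self-contained; they are not the Logrus types.
type Entry struct{ Message string }

type Hook interface {
	Fire(*Entry) error
}

// ignoreErrors decorates another hook so that delivery failures are
// reported but never propagated -- one kind of "enhanced functionality"
// a decorator hook can add.
type ignoreErrors struct{ inner Hook }

func (h ignoreErrors) Fire(e *Entry) error {
	if err := h.inner.Fire(e); err != nil {
		fmt.Println("hook failed, ignoring:", err)
	}
	return nil
}

// failing simulates a hook whose backend is unreachable.
type failing struct{}

func (failing) Fire(*Entry) error { return fmt.Errorf("connection refused") }

func main() {
	var h Hook = ignoreErrors{inner: failing{}}
	err := h.Fire(&Entry{Message: "hello"})
	fmt.Println("returned error:", err)
}
```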
Package ataman is a colored terminal text rendering engine using ANSI sequences. The project aims at simple text attribute manipulation with templates. Here are a couple of examples to introduce the project. Customization of decoration styles can be done through `decorate.Style`, e.g. Templates follow simple rules. Decoration styles use the following dictionary. Some template examples with the curly decoration style.
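Under the hood, rendering comes down to emitting ANSI escape sequences around the text. This stdlib-only sketch shows the kind of raw sequences a template engine like ataman expands its style tags into; it does not use ataman's actual API.

```go
package main

import "fmt"

// colorize wraps text in raw ANSI escape sequences (bold + red, then a
// reset) -- the bytes a template engine emits after expanding style tags.
func colorize(text string) string {
	const (
		bold  = "\x1b[1m"
		red   = "\x1b[31m"
		reset = "\x1b[0m"
	)
	return bold + red + text + reset
}

func main() {
	fmt.Println(colorize("error:") + " something went wrong")
}
```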
Package irc provides an IRC client implementation. This overview provides brief introductions for types and concepts. The godoc for each type contains expanded documentation. Jump to the package examples to see what writing client code looks like with this package. The README file links to available extension packages for features such as state tracking, flood protection, and more. These are the main interfaces and structs that you will interact with while using this package: The Client type provides a simple abstraction around an IRC connection. It manages reading and writing messages to the IRC connection and calls your handler for each message it parses. It also deals with protocol concerns like ping timeouts. This interface enables the development of handler packages. Such packages may implement protocols such as IRCv3 Message IDs, or common bot concerns like preferred/alternate nickname for disconnects, flood protection, and channel state. Because the Handler interface for the irc package mimics the signature of the http.Handler interface, most patterns for http middleware can also be applied to irc handlers. Search results for phrases like "golang http middleware pattern", "adapter patterns", and "decorator patterns" should all return concepts from tutorials and blog posts that can be applied to this interface. The MessageWriter interface accepts any type that knows how to marshal itself into a line of IRC-encoded text. Most of the time it makes sense to send a Message struct, either by using the NewMessage function or any of the related constructors such as irc.Msg, irc.Notice, irc.Describe, etc. However, it can also be very simple to implement yourself: The named Message constructors (irc.Msg, irc.Notice, etc.) should generally be preferred because they explicitly list the available parameters for each command. This provides type safety, ordering safety, and most IDEs will provide intellisense suggestions and documentation for each parameter. 
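The "marshal itself into a line of IRC-encoded text" idea is easy to sketch. The `MarshalIRC` method name and the `msg` type below are hypothetical (the real package's marshaling interface may differ), but the wire shape shown, a command, space-separated parameters, and a ':'-prefixed trailing parameter, is standard IRC.

```go
package main

import "fmt"

// rawLine is the simplest possible self-marshaling type: it already is
// the encoded line.
type rawLine string

func (r rawLine) MarshalIRC() (string, error) { return string(r), nil }

// msg shows the same idea for a structured command.
type msg struct {
	command  string
	target   string
	trailing string
}

func (m msg) MarshalIRC() (string, error) {
	// command, target, then the trailing parameter prefixed with ':'.
	return fmt.Sprintf("%s %s :%s", m.command, m.target, m.trailing), nil
}

func main() {
	line, _ := msg{command: "PRIVMSG", target: "#world", trailing: "Hello!"}.MarshalIRC()
	fmt.Println(line) // PRIVMSG #world :Hello!
}
```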
In other words: The Router type is an implementation of Handler. It provides a convenient way to route incoming messages to specific handler functions by matching against message attributes like the command, source, target channel, and much more. It also provides an easy way to apply middleware, either globally or to specific routes. You are not required to use it, however. You can just as easily write your own message handler. It performs a role comparable to http.ServeMux, though it is not really a multiplexer. Middleware are just handlers. The term "middleware" applies to handlers which follow a pattern of accepting a handler as one of their arguments and returning a handler. Middleware can intercept outgoing messages by decorating the MessageWriter, as well as call the next handler with a modified *Message. These two abilities allow well-written packages to provide middleware that extend a client with nearly any IRC capability. Because the ordering of received messages is important for calculating various client states, it is generally not safe for middleware handlers to operate concurrently unless they can maintain message ordering. To bring it all together, this is the general sequence of events when running a client: Each Message parsed from the stream will result in a call to the client's handler, which is given a MessageWriter and reference to the parsed Message struct. Assuming that you use the package's Router type as your handler, this is what that sequence looks like: Any of these actions could occur at any point in the chain: This package does not implement message formatting. That is to say, there are no irc.Msgf or related functions. Formatting requirements vary widely by application. Some applications will want to extend the formatting rules with their own replacement sequences to include IRC color formatting in replies. 
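The middleware shape described above, a function that accepts the next handler and returns a handler, looks like this in Go. The `Handler` and `Message` types are simplified local stand-ins for the package's own:

```go
package main

import (
	"fmt"
	"strings"
)

// Message and Handler are simplified stand-ins so the sketch is
// self-contained.
type Message struct{ Command, Trailing string }

type Handler interface {
	Handle(m *Message)
}

// HandlerFunc adapts a plain function to the Handler interface,
// mirroring the http.HandlerFunc pattern the text alludes to.
type HandlerFunc func(m *Message)

func (f HandlerFunc) Handle(m *Message) { f(m) }

// upperCommands is middleware: it accepts the next Handler and returns
// a Handler, passing along a modified *Message with a normalized command.
func upperCommands(next Handler) Handler {
	return HandlerFunc(func(m *Message) {
		m.Command = strings.ToUpper(m.Command)
		next.Handle(m)
	})
}

func main() {
	h := upperCommands(HandlerFunc(func(m *Message) {
		fmt.Println(m.Command, ":"+m.Trailing)
	}))
	h.Handle(&Message{Command: "privmsg", Trailing: "hi"})
	// PRIVMSG :hi
}
```

Because the middleware runs synchronously before calling next, message ordering is preserved, which matters for the state-tracking caveat mentioned above.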
Rather than implement nonstandard rules here (and force users to look up replacements), the canonical way to write formatted replies in the style of fmt.Printf is to write your own reply helper functions. For example: Hello, #World: The following code connects to an IRC server, waits for RPL_WELCOME, then requests to join a channel called #world, waits for the server to tell us that we've joined, then sends the message "Hello!" to #world, then disconnects with the message "Goodbye.". This example uses the message router to perform more complicated message matching with an event callback style. Connects to an IRC server, joins a channel called "#world", sends the message "Hello!", then quits when CTRL+C is pressed. The simplest possible implementation of a Message handler. In this case, "simple" means it is not using package features. The code should be considered a "messy" implementation, but it demonstrates how easy it is to get down to the protocol level, if desired.
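A reply helper in the style the text suggests might look like the following. `replyf` and the function-valued writer are illustrative only; in real client code the writer would be the MessageWriter your handler receives.

```go
package main

import "fmt"

// replyf formats a message with fmt.Sprintf and hands the encoded
// PRIVMSG line to a writer function. The writer is abstracted to a
// plain function here so the sketch is self-contained.
func replyf(write func(string), target, format string, args ...interface{}) {
	write(fmt.Sprintf("PRIVMSG %s :%s", target, fmt.Sprintf(format, args...)))
}

func main() {
	replyf(func(line string) { fmt.Println(line) }, "#world", "Hello, %s!", "#World")
	// PRIVMSG #world :Hello, #World!
}
```

Application-specific formatting rules (IRC colors, mention prefixes, and so on) can live entirely inside such helpers without the package needing to know about them.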
Package uitable provides a decorator for formatting data as a table.
Package funcache, inspired by https://docs.python.org/3/library/functools.html, provides LRU, LFU, and ARC caches and a decorator pattern.
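The functools-style decoration it refers to is straightforward to sketch: wrap a function in a closure that consults a cache before computing. The unbounded map below is a stand-in for the package's LRU/LFU/ARC policies, which differ only in how entries are evicted.

```go
package main

import "fmt"

// memoize decorates a function with an unbounded cache -- the simplest
// form of a functools.lru_cache-style decorator.
func memoize(f func(int) int) func(int) int {
	cache := map[int]int{}
	return func(n int) int {
		if v, ok := cache[n]; ok {
			return v // cache hit: skip the wrapped function entirely
		}
		v := f(n)
		cache[n] = v
		return v
	}
}

func main() {
	calls := 0
	slowSquare := func(n int) int { calls++; return n * n }
	fast := memoize(slowSquare)
	fmt.Println(fast(4), fast(4), calls) // 16 16 1
}
```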
Package wayland is the root of the wayland sample repository. Simple shm demo, draws into shared memory, does not rely on the window package. Smoke demo. Reacts to mouse input, uses the window package. ImageViewer demo. Displays an image file. Does not use the window package, draws its own decorations. Draws fonts in the titlebar; for this it needs the Deja Vu fonts package fonts-dejavu. Editor demo. Currently does not work. Provides basic OS functions like the creation of an anonymous temporary file (CreateAnonymousFile), Mmap, Munmap, and socket communication. Utility functions found in the wayland-client, provided for convenience. Wrapper around the C library libxkbcommon. Used inside the window package. Needs libxkbcommon-dev for compilation and recommends libx11-data for run-time operation. Implements a window model on top of wayland. Aims to be a lot like the original window.c code. Uses wl. Loads X cursors; requires a cursor theme to be installed, for instance dmz-cursor-theme. Like cairo but not cairo. Does not depend on anything. External contains error-checking code and the swizzle function for multiple architectures. No dependencies. Unstable wayland protocols. Depends on wl. The wayland protocol itself; does not require any external deps (except for a wayland server during runtime). Stable xdg protocol. Depends on wl.