Package errors provides simple error handling primitives. The traditional error handling idiom in Go (returning the error unmodified), when applied recursively up the call stack, results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error.

The errors.Wrap function returns a new error that adds context to the original error by recording a stack trace at the point Wrap is called, together with the supplied message. If additional control is required, the errors.WithStack and errors.WithMessage functions destructure errors.Wrap into its component operations: annotating an error with a stack trace and with a message, respectively.

Using errors.Wrap constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Wrap to retrieve the original error for inspection. Any error value which implements the causer interface (a single Cause() error method) can be inspected by errors.Cause. errors.Cause recursively retrieves the topmost error that does not implement causer, which is assumed to be the original cause. Although the causer interface is not exported by this package, it is considered a part of its stable public interface.

All error values returned from this package implement fmt.Formatter and can be formatted by the fmt package. The following verbs are supported: %s prints the error (and its cause recursively, if present), %v is equivalent to %s, and %+v prints each Frame of the error's stack trace in detail. New, Errorf, Wrap, and Wrapf record a stack trace at the point they are invoked; this information can be retrieved from any error value providing a StackTrace() errors.StackTrace method. The returned errors.StackTrace type is defined as []Frame. The Frame type represents a call site in the stack trace and supports the fmt.Formatter interface, which can be used for printing information about the stack trace of this error. Although the stackTracer interface is not exported by this package, it is considered a part of its stable public interface. See the documentation for Frame.Format for more details.
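A minimal sketch of the Wrap/Cause workflow described above (the readConfig helper is hypothetical):

	package main

	import (
		"fmt"
		"os"

		"github.com/pkg/errors"
	)

	func readConfig(path string) ([]byte, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			// Wrap records a stack trace and a message without
			// destroying the original error value.
			return nil, errors.Wrap(err, "reading config")
		}
		return data, nil
	}

	func main() {
		_, err := readConfig("missing.conf")
		fmt.Printf("%v\n", err)        // message chain only
		fmt.Printf("%+v\n", err)       // messages plus recorded stack trace
		fmt.Println(errors.Cause(err)) // the original *os.PathError
	}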
Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions that have a name matched by the regex (Info|Warning|Error|Fatal)(Context)?(Depth)?(f)?. If Context is present, the function takes a context.Context argument. The context is used to pass the Trace Context through to log sinks that can make use of it. It is recommended to use the context variants of the functions over the non-context variants whenever a context is available, to make sure Trace Contexts are present in logs. If Depth is present, the function logs from a different depth in the call stack. This enables a callee to emit logs that use the callsite information of its caller or any other callers in the stack. When depth == 0, the original callee's line information is emitted. When depth > 0, depth frames are skipped in the call stack and the final frame is treated like the original callee to Info. If 'f' is present, the function formats according to a format specifier. This package also provides V-style logging controlled by the -v and -vmodule=file=2 flags; see the documentation for the V function for an explanation, and the sketch below for basic examples. Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to files in a temporary directory. This package provides several flags that modify this behavior; as a result, flag.Parse must be called before any logging is done. Other flags provide aids to debugging.
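A minimal sketch of the patterns above, assuming the github.com/golang/glog import path:

	package main

	import (
		"flag"

		"github.com/golang/glog"
	)

	func main() {
		flag.Parse()       // glog registers flags such as -v and -vmodule; parse before logging.
		defer glog.Flush() // flush buffered output before exiting

		glog.Info("Prepare to repel boarders")
		glog.Infof("Got %d packets", 42)

		// V-style logging: the guarded form avoids evaluating the
		// arguments unless -v=2 (or higher) is in effect.
		if glog.V(2) {
			glog.Info("Starting transaction...")
		}
		glog.V(2).Infoln("Processed", 7, "elements")
	}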
Package gopacket provides packet decoding for the Go language. gopacket contains many sub-packages with additional functionality you may find useful. If you're looking to dive right into code, see the examples subdirectory for numerous simple binaries built using gopacket libraries. The minimum Go version required is 1.5, except for pcapgo/EthernetHandle, afpacket, and bsdbpf, which need at least 1.7 due to x/sys/unix dependencies.

gopacket takes in packet data as a []byte and decodes it into a packet with a non-zero number of "layers". Each layer corresponds to a protocol within the bytes. Once a packet has been decoded, the layers of the packet can be requested from the packet. Packets can be decoded from a number of starting points. Many of our base types implement Decoder, which allows us to decode packets for which we don't have full data.

Most of the time, you won't just have a []byte of packet data lying around. Instead, you'll want to read packets in from somewhere (a file, an interface, etc.) and process them. To do that, you'll want to build a PacketSource. First, you'll need to construct an object that implements the PacketDataSource interface. There are implementations of this interface bundled with gopacket in the gopacket/pcap and gopacket/pfring subpackages; see their documentation for more information on their usage. Once you have a PacketDataSource, you can pass it into NewPacketSource, along with a Decoder of your choice, to create a PacketSource. Once you have a PacketSource, you can read packets from it in multiple ways; see the docs for PacketSource for more details. The easiest method is the Packets function, which returns a channel, then asynchronously writes new packets into that channel, closing the channel if the packetSource hits an end-of-file. You can change the decoding options of the packetSource by setting fields in packetSource.DecodeOptions; see the following sections for more details.

gopacket optionally decodes packet data lazily, meaning it only decodes a packet layer when it needs to handle a function call. Lazily-decoded packets are not concurrency-safe: since not all layers have been decoded, each call to Layer() or Layers() has the potential to mutate the packet in order to decode the next layer. If a packet is used in multiple goroutines concurrently, don't use gopacket.Lazy; gopacket will then decode the packet fully, and all future function calls won't mutate the object. By default, gopacket will copy the slice passed to NewPacket and store the copy within the packet, so future mutations to the bytes underlying the slice don't affect the packet and its layers. If you can guarantee that the underlying slice bytes won't be changed, you can pass NoCopy to gopacket.NewPacket, and it'll use the passed-in slice itself. The fastest method of decoding is to use both Lazy and NoCopy, but note from the many caveats above that for some implementations either or both may be dangerous.

During decoding, certain layers are stored in the packet as well-known layer types. For example, IPv4 and IPv6 are both considered NetworkLayer layers, while TCP and UDP are both TransportLayer layers. We support 4 layers, corresponding to the 4 layers of the TCP/IP layering scheme (roughly analogous to layers 2, 3, 4, and 7 of the OSI model). To access these, you can use the packet.LinkLayer, packet.NetworkLayer, packet.TransportLayer, and packet.ApplicationLayer functions.
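A minimal sketch covering both entry points described above, assuming the github.com/google/gopacket import path: decoding a raw []byte with NewPacket, and reading from a live pcap handle via a PacketSource (the device name is illustrative):

	package main

	import (
		"fmt"
		"log"

		"github.com/google/gopacket"
		"github.com/google/gopacket/layers"
		"github.com/google/gopacket/pcap"
	)

	// decodeOne decodes one packet's bytes, starting at the Ethernet layer.
	func decodeOne(data []byte) {
		packet := gopacket.NewPacket(data, layers.LayerTypeEthernet, gopacket.Default)
		if tcpLayer := packet.Layer(layers.LayerTypeTCP); tcpLayer != nil {
			tcp := tcpLayer.(*layers.TCP)
			fmt.Println("TCP ports:", tcp.SrcPort, "->", tcp.DstPort)
		}
		if app := packet.ApplicationLayer(); app != nil {
			fmt.Println("payload bytes:", len(app.Payload()))
		}
	}

	func main() {
		handle, err := pcap.OpenLive("eth0", 1600, true, pcap.BlockForever)
		if err != nil {
			log.Fatal(err)
		}
		defer handle.Close()

		source := gopacket.NewPacketSource(handle, handle.LinkType())
		for packet := range source.Packets() { // channel closes on EOF
			// Packets from the source are already decoded; re-decoding
			// here only demonstrates the NewPacket entry point.
			decodeOne(packet.Data())
		}
	}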
Each of these accessor functions returns a corresponding interface (gopacket.{Link,Network,Transport,Application}Layer). The first three provide methods for getting src/dst addresses for that particular layer, while the final layer provides a Payload function to get payload data. This is helpful, for example, for getting payloads for all packets regardless of their underlying data type, as in the decoding sketch above. A particularly useful layer is ErrorLayer, which is set whenever there's an error parsing part of the packet. Note that we don't return an error from NewPacket because we may have decoded a number of layers successfully before running into our erroneous layer. You may still be able to get your Ethernet and IPv4 layers correctly, even if your TCP layer is malformed.

gopacket has two useful objects, Flow and Endpoint, for communicating in a protocol-independent manner the fact that a packet is coming from A and going to B. The general layer types LinkLayer, NetworkLayer, and TransportLayer all provide methods for extracting their flow information, without worrying about the type of the underlying Layer. A Flow is a simple object made up of a set of two Endpoints, one source and one destination; it details the sender and receiver of the Layer of the Packet. An Endpoint is a hashable representation of a source or destination. For example, for LayerTypeIPv4, an Endpoint contains the IP address bytes for a v4 IP packet. A Flow can be broken into Endpoints, and Endpoints can be combined into Flows. Both Endpoint and Flow objects can be used as map keys, and the equality operator can compare them, so you can easily group together all packets based on endpoint criteria. For load-balancing purposes, both Flow and Endpoint have FastHash() functions, which provide quick, non-cryptographic hashes of their contents. Of particular importance is the fact that Flow FastHash() is symmetric: A->B will have the same hash as B->A. This allows us to split up a packet stream while still making sure that each stream sees all packets for a flow (and its bidirectional opposite).

If your network has some strange encapsulation, you can implement your own decoder; for example, you might handle Ethernet packets which are encapsulated in a 4-byte header. See the docs for Decoder and PacketBuilder for more details on how coding decoders works, or look at RegisterLayerType and RegisterEndpointType to see how to add layer/endpoint types to gopacket.

TLDR: DecodingLayerParser takes about 10% of the time that NewPacket takes to decode packet data, but only for known packet stacks. Basic decoding using gopacket.NewPacket or PacketSource.Packets is somewhat slow due to its need to allocate a new packet and every respective layer. It's very versatile and can handle all known layer types, but sometimes you really only care about a specific set of layers, so that versatility is wasted. DecodingLayerParser avoids memory allocation altogether by decoding packet layers directly into preallocated objects, which you can then reference to get the packet's information (see the sketch after this section). The important thing to note is that the parser modifies the passed-in layers (eth, ip4, ip6, tcp) instead of allocating new ones, thus greatly speeding up the decoding process. It's even branching based on layer type: it'll handle an (eth, ip4, tcp) or (eth, ip6, tcp) stack. However, it won't handle any other type;
since no other decoders were passed in, an (eth, ip4, udp) stack will stop decoding after ip4, and only pass back [LayerTypeEthernet, LayerTypeIPv4] through the 'decoded' slice (along with an error saying it can't decode a UDP packet). Unfortunately, not all layers can be used by DecodingLayerParser: only those implementing the DecodingLayer interface are usable. Also, it's possible to create DecodingLayers that are not themselves Layers; see layers.IPv6ExtensionSkipper for an example of this.

By default, DecodingLayerParser uses a native map to store and search for a layer to decode. Though versatile, in some cases this solution may not be optimal. For example, if you have only a few layers, faster operations may be provided by sparse array indexing or a linear array scan. To accommodate these scenarios, the DecodingLayerContainer interface is introduced along with its implementations: DecodingLayerSparse, DecodingLayerArray, and DecodingLayerMap. You can specify a container implementation to DecodingLayerParser with the SetDecodingLayerContainer method. To skip one level of indirection (though sacrificing some capabilities), you may also use a DecodingLayerContainer as a decoding tool as it is; in this case you have to handle unknown layer types and layer panics yourself. DecodingLayerSparse is the fastest, but it is most effective when the LayerType values your layers decode are small, since large values increase its memory footprint. DecodingLayerArray is very compact and primarily usable if the number of decoding layers is not big (up to ~10-15, but please do your own benchmarks). DecodingLayerMap is the most versatile one and is used by DecodingLayerParser by default. Please refer to the tests and benchmarks in the layers subpackage to further examine usage examples and performance measurements. You may also choose to implement your own DecodingLayerContainer if you want to make use of your own internal packet decoding logic.

As well as offering the ability to decode packet data, gopacket allows you to create packets from scratch, too. A number of gopacket layers implement the SerializableLayer interface; these layers can be serialized to a []byte via SerializeTo. SerializeTo PREPENDS the given layer onto the SerializeBuffer, treating the current buffer's Bytes() slice as the payload of the serializing layer. Therefore, you can serialize an entire packet by serializing a set of layers in reverse order (Payload, then TCP, then IP, then Ethernet, for example). The gopacket.SerializeLayers function is a helper that does exactly that; to generate an (empty and useless, because no fields are set) Ethernet(IPv4(TCP(Payload))) packet, for example, see the serialization sketch below.

If you use gopacket, you'll almost definitely want to make sure gopacket/layers is imported, since when imported it sets all the LayerType variables and fills in a lot of interesting variables/maps (DecodersByLayerName, etc). Therefore, it's recommended that even if you don't use any layers functions directly, you still import with:
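	import (
		_ "github.com/google/gopacket/layers"
	)

A sketch of the DecodingLayerParser pattern described above, decoding into preallocated layers (packetData is assumed to hold raw Ethernet bytes):

	func decodeFast(packetData []byte) {
		var (
			eth layers.Ethernet
			ip4 layers.IPv4
			ip6 layers.IPv6
			tcp layers.TCP
		)
		parser := gopacket.NewDecodingLayerParser(layers.LayerTypeEthernet, &eth, &ip4, &ip6, &tcp)
		decoded := []gopacket.LayerType{}
		if err := parser.DecodeLayers(packetData, &decoded); err != nil {
			// An (eth, ip4, udp) stack stops here with an error, but
			// 'decoded' still holds the layers decoded so far.
			fmt.Println("partial decode:", err)
		}
		for _, lt := range decoded {
			switch lt {
			case layers.LayerTypeIPv4:
				fmt.Println("IPv4", ip4.SrcIP, "->", ip4.DstIP)
			case layers.LayerTypeTCP:
				fmt.Println("TCP", tcp.SrcPort, "->", tcp.DstPort)
			}
		}
	}

And a matching sketch of serialization in reverse order; all layer fields are left at their zero values, so the result is the empty and useless packet mentioned above:

	buf := gopacket.NewSerializeBuffer()
	opts := gopacket.SerializeOptions{FixLengths: true, ComputeChecksums: true}
	if err := gopacket.SerializeLayers(buf, opts,
		&layers.Ethernet{},
		&layers.IPv4{},
		&layers.TCP{},
		gopacket.Payload([]byte{1, 2, 3, 4})); err != nil {
		panic(err)
	}
	// buf.Bytes() now holds the serialized Ethernet(IPv4(TCP(Payload))) packet.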
Package echo implements a high-performance, minimalist Go web framework. Learn more at https://echo.labstack.com
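A minimal sketch of an echo server, assuming the v4 import path:

	package main

	import (
		"net/http"

		"github.com/labstack/echo/v4"
	)

	func main() {
		e := echo.New()
		e.GET("/", func(c echo.Context) error {
			return c.String(http.StatusOK, "Hello, World!")
		})
		e.Logger.Fatal(e.Start(":1323"))
	}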
Package tview implements rich widgets for terminal-based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package also provides Application, which is used to poll the event queue and draw widgets on screen. A very basic example (see the sketch at the end of this section) shows a box with the title "Hello, world!": first, we create a box primitive with a border and a title; then we create an application, set the box as its root primitive, and run the event loop. The application exits when the application's Application.Stop function is called or when Ctrl-C is pressed. You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used.

Throughout this package, styles are specified using the tcell.Style type. Styles specify colors with the tcell.Color type. Functions such as tcell.GetColor, tcell.NewHexColor, and tcell.NewRGBColor can be used to create colors from W3C color names or RGB values. The tcell.Style type also allows you to specify text attributes such as "bold" or "underline", or a URL which some terminals use to display hyperlinks.

Almost all strings which are displayed may contain style tags. A style tag's content is always wrapped in square brackets. In its simplest form, a style tag specifies the foreground color of the text; colors in these tags are W3C color names or six hexadecimal digits following a hash sign, for example "[red]" or "[#00ff1a]". A style tag changes the style of the characters following that style tag. There is no style stack and no nesting of style tags. Style tags are used in almost everything from box titles, list text, and form item labels to table cells. In a TextView, this functionality has to be switched on explicitly; see the TextView documentation for more information. A style tag's full format is "[<foreground>:<background>:<flags>:<url>]". Each of the four fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered style tags.) Fields that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can specify flags to turn on certain attributes (some flags may not be supported by your terminal); use uppercase letters to turn off the corresponding attribute, for example "B" to turn off bold. Uppercase letters have no effect if the attribute was not previously set. Setting a URL allows you to turn a piece of text into a hyperlink in some terminals; specify a dash ("-") to mark the end of the hyperlink. Hyperlinks must only contain single-byte characters (e.g. ASCII) and they may not contain bracket characters ("[" or "]").

In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags; that is, any character that may be used in color or region tags will be recognized. You can use the Escape() function to insert brackets automatically where needed.

When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style. Note that most terminals will not report information about their color theme.
This package therefore does not support using the terminal's color theme. The default style is a dark theme, and you must change the Styles variable to switch to a light (or other) theme. This package supports all Unicode characters supported by your terminal. If your terminal supports mouse events, you can enable mouse support for your application by calling Application.EnableMouse. Note that this may interfere with your terminal's default mouse behavior. Mouse support is disabled by default. Many functions in this package are not thread-safe. For many applications, this is not an issue: if your code makes changes in response to key events, the corresponding callback function will execute in the main goroutine and thus will not cause any race conditions. (Exceptions to this are documented.) If you access your primitives from other goroutines, however, you will need to synchronize execution. The easiest way to do this is to call Application.QueueUpdate or Application.QueueUpdateDraw (see the function documentation for details, and the sketch below). One exception to this is the io.Writer interface implemented by TextView: you can safely write to a TextView from any goroutine; see the TextView documentation for details. You can also call Application.Draw from any goroutine without having to wrap it in Application.QueueUpdate. And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use Application.QueueUpdate, as that may lead to deadlocks. It is also not necessary to call Application.Draw from such callbacks, as it will be called automatically. All widgets listed above contain the Box type. All of Box's functions are therefore available for all widgets, too. Please note that if you are using the functions of Box on a subclass, they will return a *Box, not the subclass; this is a Go limitation. So while tview supports method chaining in many places, these chains must be broken when using Box's functions: for example, you will need to call Box.SetBorder separately rather than as part of a chain. All widgets also implement the Primitive interface. The tview package's rendering is based on version 2 of https://github.com/gdamore/tcell. It uses types and constants from that package (e.g. colors, styles, and keyboard values).
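A minimal sketch of the box example described above, including a cross-goroutine update via Application.QueueUpdateDraw:

	package main

	import "github.com/rivo/tview"

	func main() {
		app := tview.NewApplication()
		box := tview.NewBox().
			SetBorder(true).
			SetTitle("Hello, world!")

		go func() {
			// Safe update from another goroutine, as described above.
			app.QueueUpdateDraw(func() {
				box.SetTitle("Hello again!")
			})
		}()

		if err := app.SetRoot(box, true).Run(); err != nil {
			panic(err)
		}
	}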
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported via gopkg.in as "gopkg.in/natefinch/lumberjack.v2". The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution; instead it is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files; using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
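A minimal sketch of that standard-library wiring (the file path and limits are illustrative):

	package main

	import (
		"log"

		"gopkg.in/natefinch/lumberjack.v2"
	)

	func main() {
		log.SetOutput(&lumberjack.Logger{
			Filename:   "/var/log/myapp/foo.log",
			MaxSize:    500, // megabytes before rotation
			MaxBackups: 3,   // old files to retain
			MaxAge:     28,  // days to retain old files
		})
		log.Println("application started")
	}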
Package echo provides functions to trace the labstack/echo package (https://github.com/labstack/echo). Currently only the routing of a received message can be instrumented. To do so, use the Middleware function.
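A sketch of wiring the middleware, assuming the dd-trace-go v1 contrib import path for echo v4 and its WithServiceName option (check the paths and options against your dd-trace-go version):

	package main

	import (
		"net/http"

		"github.com/labstack/echo/v4"
		echotrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/labstack/echo.v4"
	)

	func main() {
		e := echo.New()
		// Trace the routing of every received request.
		e.Use(echotrace.Middleware(echotrace.WithServiceName("my-echo-service")))
		e.GET("/ping", func(c echo.Context) error {
			return c.String(http.StatusOK, "pong")
		})
		e.Logger.Fatal(e.Start(":8080"))
	}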
Package beego provides an MVC framework. beego is an open-source, high-performance, modular, full-stack web framework used for rapid development of RESTful APIs, web apps, and backend services in Go. beego is inspired by Tornado, Sinatra, and Flask, with the added benefit of some Go-specific features such as interfaces and struct embedding. More information: http://beego.me
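A minimal sketch of a beego controller and route (the classic v1 import path is assumed; newer beego versions use a different module path):

	package main

	import "github.com/astaxie/beego"

	// MainController handles requests routed to "/".
	type MainController struct {
		beego.Controller
	}

	func (c *MainController) Get() {
		c.Ctx.WriteString("hello world")
	}

	func main() {
		beego.Router("/", &MainController{})
		beego.Run()
	}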
Package stack is a pluggable framework for microservices.
Package websocket implements the RFC 6455 WebSocket protocol (https://tools.ietf.org/html/rfc6455). Deprecated: coder now maintains this library at https://github.com/coder/websocket. Use Dial to dial a WebSocket server and Accept to accept a WebSocket client; Conn represents the resulting WebSocket connection. The examples are the best way to understand how to correctly use the library. The wsjson and wspb subpackages contain helpers for JSON and protobuf messages. More documentation at https://nhooyr.io/websocket. The client side supports compiling to Wasm; it wraps the WebSocket browser API (see https://developer.mozilla.org/en-US/docs/Web/API/WebSocket), with some important caveats to be aware of. This example demonstrates an echo server. This example demonstrates full-stack chat with an automated test.
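A minimal client sketch using Dial and the wsjson helpers, assuming the github.com/coder/websocket path from the deprecation note (the older nhooyr.io/websocket path exposes the same API):

	package main

	import (
		"context"
		"time"

		"github.com/coder/websocket"
		"github.com/coder/websocket/wsjson"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()

		c, _, err := websocket.Dial(ctx, "ws://localhost:8080", nil)
		if err != nil {
			panic(err)
		}
		// Deferred close covers error paths; the status is reported to the peer.
		defer c.Close(websocket.StatusInternalError, "the sky is falling")

		if err := wsjson.Write(ctx, c, "hi"); err != nil {
			panic(err)
		}
		c.Close(websocket.StatusNormalClosure, "")
	}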
Package cloudformation provides the API client, operations, and parameter types for AWS CloudFormation. CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, Elastic Load Balancing, and Amazon EC2 Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure. With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you. For more information about CloudFormation, see the CloudFormation product page. CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com.
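A minimal sketch of constructing the client and listing stacks with the AWS SDK for Go v2:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/cloudformation"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx) // credentials/region from the environment
		if err != nil {
			log.Fatal(err)
		}
		client := cloudformation.NewFromConfig(cfg)

		out, err := client.DescribeStacks(ctx, &cloudformation.DescribeStacksInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range out.Stacks {
			fmt.Println(aws.ToString(s.StackName), s.StackStatus)
		}
	}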
Package wmi provides a WQL interface for WMI on Windows. Example code to print names of running processes:
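A sketch following the package's struct-based query pattern (the elided original may differ slightly):

	package main

	import (
		"fmt"
		"log"

		"github.com/StackExchange/wmi"
	)

	// Win32_Process mirrors the WMI class of the same name; queried
	// fields are mapped into the struct by field name.
	type Win32_Process struct {
		Name string
	}

	func main() {
		var dst []Win32_Process
		q := wmi.CreateQuery(&dst, "") // builds "SELECT Name FROM Win32_Process"
		if err := wmi.Query(q, &dst); err != nil {
			log.Fatal(err)
		}
		for _, p := range dst {
			fmt.Println(p.Name)
		}
	}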
Package appdash provides a Go app performance tracing suite. Appdash allows you to trace the end-to-end performance of hierarchically structured applications. You can, for example, measure the time and see the detailed information of each HTTP request and SQL query made by an entire distributed web application. The cmd/appdash tool launches a web front-end which displays a web UI for viewing collected app traces. It is effectively a remote collector to which your application can connect and send events. Timing and application-specific metadata information can be viewed in a nice timeline view for each span (e.g. an HTTP request) and its children. The web front-end can also be embedded in your own Go HTTP server by utilizing the traceapp sub-package, which is effectively what cmd/appdash serves internally. Sub-packages for HTTP and SQL event tracing are provided for use with appdash, which allow it to function equivalently to Google's Dapper and Twitter's Zipkin performance tracing suites. The most high-level structure is a Trace, which represents the performance of an application from start to finish (in an HTTP application, for example, the loading of a web page). A Trace is a tree structure made up of several spans, which are just IDs (in an HTTP application, these IDs are passed through the stack via a few special headers). Each span ID has a set of Events that directly correspond to it inside a Collector. These events can be any combination of message, log, time-span, or time-stamped events (the cmd/appdash web UI displays these events as appropriate). Inside your application, a Recorder is used to send events to a Collector, which can be a remote HTTP(S) collector, a local in-memory or persistent collector, etc. Additionally, you can implement the Collector interface yourself and store events however you like.
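A rough sketch of the Recorder/Collector relationship described above, assuming a remote collector on the conventional port; the event helpers shown are assumptions and may differ between appdash versions:

	package main

	import "sourcegraph.com/sourcegraph/appdash"

	func main() {
		// Send events to a remote collector (e.g. one started by cmd/appdash).
		collector := appdash.NewRemoteCollector("localhost:7701")

		// A Recorder ties events to a span ID within a trace.
		rec := appdash.NewRecorder(appdash.NewRootSpanID(), collector)
		rec.Name("index request")
		rec.Event(appdash.Msg("fetching user data"))
	}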
KCLVM binding for Go
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings; values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value for the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety; this is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values: log15 understands that they alternate in the variadic argument list. If you really do favor your type-safety, you may choose to pass a log.Ctx instead.

Frequently, you want to add context to a logger so that you can track actions associated with it; an HTTP request is a good example. You can easily create new loggers that have context that is automatically included with each log line. The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface inspired by net/http's handler interface. Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures; the sketch below shows a handler that prints logfmt output to stdout, and a composite handler that defers to two others, one printing records from the rpc package in logfmt to standard out, the other printing records at Error level or above in JSON formatted output to the file /var/log/service.json.

This package implements three Handlers that add debugging information to the context: CallerFileHandler, CallerFuncHandler, and CallerStackHandler. CallerFileHandler adds the source file and line number of each logging call to the context; CallerStackHandler logs the call stack rather than just the call site. The "%+v" format instructs the handler to include the path of the source file relative to the compile-time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available.

The Handler interface is so simple that it's also trivial to write your own. Consider a handler which tries to write to one handler, but if that fails falls back to writing to another handler, including the error it encountered when trying to write to the primary. This might be useful when trying to log over a network socket: if that fails, you want to log those records to a file on disk. This pattern is so useful that a generic version handling an arbitrary number of Handlers is included as part of this library, called FailoverHandler.
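A sketch of the basics and of the handler composition described above (the file path is illustrative):

	package main

	import (
		log "github.com/inconshreveable/log15"
	)

	func main() {
		// The implicit 'msg' value comes first, then alternating key/value context.
		log.Info("page accessed", "path", "/org", "user_id", 9)

		// A logger that includes its context with every line it writes.
		srvlog := log.New("module", "app/server")
		srvlog.Warn("abnormal conn rate", "rate", 0.500, "low", 0.100, "high", 0.800)

		// Compose handlers: rpc records as logfmt on stdout, errors as JSON to a file.
		srvlog.SetHandler(log.MultiHandler(
			log.MatchFilterHandler("pkg", "app/rpc", log.StdoutHandler),
			log.LvlFilterHandler(log.LvlError,
				log.Must.FileHandler("/var/log/service.json", log.JsonFormat())),
		))
	}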
Sometimes you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it is not evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. If such a message is not logged for any reason (for example, because your logging level is set to Error), the wrapped function (say, an expensive factorRSAKey) is never evaluated.

The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, imagine a game where you have Player objects. You always want to log a player's name and whether they're alive or dead, so it is tempting to attach both as plain context when you create the player object. But then, even after a player has died, the logger will still report they are alive, because the logging context was evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive to each log message, so that the log records reflect the player's current state no matter when the log message is written (see the sketch at the end of this section).

If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level.

Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy, or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you:

- Any log record containing an error will still be printed with the error explained to you as part of the log record.

- Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values.

Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, for example if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures, like the FailoverHandler.

log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Users of your library may then enable it if they like.
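A sketch of the lazily evaluated player context described above (the Player type is hypothetical):

	package main

	import log "github.com/inconshreveable/log15"

	type Player struct {
		Name  string
		Alive bool
	}

	func main() {
		player := &Player{Name: "Bob", Alive: true}

		// The wrapped function is re-evaluated for every record,
		// not once when the logger is created.
		plog := log.New("name", player.Name,
			"alive", log.Lazy{Fn: func() bool { return player.Alive }})

		plog.Info("spawned") // alive=true
		player.Alive = false
		plog.Info("hit by arrow") // alive=false, the player's current state
	}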
The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: when a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs. Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically. There's only one problem: what if the tab url changes? We could use log.Lazy to make sure the current url is always written, but then we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys": they're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger with such an id (see the sketch below). Then we'll have a unique traceable identifier even across loading new URLs, but we'll still be able to see the tab's current URL in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure; they are all available on the Must object. All of the following excellent projects inspired the design of this library: code.google.com/p/log4go, github.com/op/go-logging, github.com/technoweenie/grohl, github.com/Sirupsen/logrus, github.com/kr/logfmt, github.com/spacemonkeygo/spacelog, golang's stdlib (notably io and net/http), and https://xkcd.com/927/.
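A sketch of the surrogate-key setup described above (the Tab type and its currentURL method are hypothetical):

	import (
		log "github.com/inconshreveable/log15"
		"github.com/inconshreveable/log15/ext"
	)

	type Tab struct {
		url    string
		Logger log.Logger
	}

	func (t *Tab) currentURL() string { return t.url }

	func NewTab(url string) *Tab {
		t := &Tab{url: url}
		// A random hex id traces the tab across navigations, while
		// the lazily evaluated url stays current in every record.
		t.Logger = log.New("id", ext.RandId(8), "url", log.Lazy{Fn: t.currentURL})
		return t
	}

And the Must object in use, panicking instead of returning an error:

	handler := log.Must.FileHandler("/var/log/app.log", log.LogfmtFormat())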
panicparse: analyzes stack dumps of Go processes and simplifies them. It is mostly useful on servers with a large number of identical goroutines, which make the crash dump harder to read than strictly necessary.
Package errors provides an easy way to annotate errors without losing the original error context. The exported `New` and `Errorf` functions are designed to replace the `errors.New` and `fmt.Errorf` functions respectively. The same underlying error is there, but the package also records the location at which the error was created. A primary use case for this library is to add extra context any time an error is returned from a function: wrapping the returned error with `errors.Trace` records the file and line number of the Trace call, while `errors.Annotate` also adds an annotation to the error. When you want to check to see if an error is of a particular type, a helper function is normally exported by the package that returned the error, like the `os` package does. The underlying cause of the error is available using the `Cause` function. The result of the `Error()` call on an annotated error is the annotations joined with colons, then the result of the `Error()` method for the underlying error that was the cause. Obviously recording the file, line, and functions is not very useful if you cannot get them back out again; calling `errors.ErrorStack` on an annotated error returns the recorded stack, one line per trace point or annotation. In a typical output, an error first generated by an external system has no location associated with it, lines generated with Trace calls carry just the location, and lines generated through Annotate also carry their message. Sometimes when responding to an error you want to return a more specific error for the situation; the `Wrap` function does this while keeping the complete error stack available, so that `errors.Cause()` will return the new error (for example, a `NotFound` error).
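A sketch of the annotation-and-inspection flow described above (the readFile helper is illustrative):

	package main

	import (
		"fmt"
		"os"

		"github.com/juju/errors"
	)

	func readFile(path string) ([]byte, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			// Annotate records the location and adds context; the
			// original error is preserved as the cause.
			return nil, errors.Annotate(err, "reading user file")
		}
		return data, nil
	}

	func main() {
		_, err := readFile("missing.json")
		if err != nil {
			fmt.Println(err)                    // annotations joined with colons
			fmt.Println(errors.Cause(err))      // the underlying *os.PathError
			fmt.Println(errors.ErrorStack(err)) // one line per recorded location
		}
	}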