This is a logger implementation that supports multiple log levels and multiple output destinations, with configurable formats and levels for each. It also supports granular output configuration to get more detailed logging for specific files/packages. Timber includes support for standard XML or JSON config files to get you started quickly. It's also easy to configure in code if you want to DIY. Basic use: IMPORTANT: timber has no default destination configured, so log messages will be dropped until a destination is configured. It can be used as a drop-in replacement for the standard logger by changing the log import statement from: to It can also be used as the output of the standard logger with Configuration in code is also simple: XML Config file: The <tag> is ignored. To configure the pattern formatter all filters accept: Pattern format specifiers (not the same as log4go!): numeric width prefixes are allowed, e.g. %10s will pad the source field to 10 spaces; the pattern defaults to %M. Both the log4go syntax of <property name="format"> and the new <format name=type> are supported; the property syntax will only ever support the pattern formatter. To configure granulars: Code Architecture: A MultiLogger <logging> which consists of many ConfigLoggers <filter>. ConfigLoggers have three properties: LogWriter <type>, Level (as a threshold) <level> and LogFormatter <format>. In practice, this means that you define ConfigLoggers with a LogWriter (where the log prints to, e.g. socket, file, stdio etc.), the Level threshold, and a LogFormatter which formats the message before writing. Because the LogFormatters and LogWriters are simple interfaces, it is easy to write your own custom implementations. Once configured, you only deal with the "Logger" interface and use the log methods in your code. The motivation for this package grew from a need to make some changes to the functionality of log4go (which had already been integrated into a larger project). I tried to maintain compatibility with log4go for the interface and configuration. The main issue I had with log4go was that each of the logger types had inconsistent and incompatible configuration. I looked at contributing changes to log4go, but I would have needed to break existing use cases, so I decided to do a rewrite from scratch.
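As a rough sketch of the drop-in replacement described above (the timber import path shown here is an assumption for illustration, not taken from this documentation):

    package main

    // The import below replaces the standard `import "log"`; the import path is
    // an assumption.
    import log "github.com/ngmoco/timber"

    func main() {
        // Existing standard-logger call sites keep compiling unchanged, but
        // messages are dropped until a destination is configured (see above).
        log.Printf("processed %d records", 42)
    }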
Package blackfriday is a Markdown processor. It translates plain text with simple formatting rules into HTML or LaTeX. Blackfriday includes an algorithm for creating sanitized anchor names corresponding to a given input text. This algorithm is used to create anchors for headings when EXTENSION_AUTO_HEADER_IDS is enabled. The algorithm is specified below, so that other packages can create compatible anchor names and links to those anchors. The algorithm iterates over the input text, interpreted as UTF-8, one Unicode code point (rune) at a time. All runes that are letters (category L) or numbers (category N) are considered valid characters. They are mapped to lower case, and included in the output. All other runes are considered invalid characters. Invalid characters that precede the first valid character, as well as invalid characters that follow the last valid character, are dropped completely. All other sequences of invalid characters between two valid characters are replaced with a single dash character '-'. SanitizedAnchorName exposes this functionality, and can be used to create compatible links to the anchor names generated by blackfriday. This algorithm is also implemented in a small standalone package at github.com/shurcooL/sanitized_anchor_name. It can be useful for clients that want a small package and don't need the full functionality of blackfriday.
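A minimal sketch of calling SanitizedAnchorName to produce an anchor per the algorithm above:

    package main

    import (
        "fmt"

        "github.com/russross/blackfriday"
    )

    func main() {
        // Letters and numbers are lower-cased; runs of any other characters
        // between them collapse to a single '-'.
        fmt.Println(blackfriday.SanitizedAnchorName("This is a header")) // this-is-a-header
    }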
Package promptui is a library providing a simple interface to create command-line prompts for Go. It can be easily integrated into spf13/cobra, urfave/cli or any CLI Go application. promptui has two main input modes: Prompt provides a single line for user input. It supports optional live validation, confirmation and masking the input. Select provides a list of options to choose from. It supports pagination, search, detailed view and custom templates. This is an example for the Prompt mode of promptui. In this example, a prompt is created with a validator function that validates the given value to make sure it's a number. If successful, it will output the chosen number in a formatted message. This is an example for the Select mode of promptui. In this example, a select is created with the days of the week as its items. When an item is selected, the selected day will be displayed in a formatted message.
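A short sketch of the Prompt mode with a numeric validator, along the lines of the example described above:

    package main

    import (
        "fmt"
        "strconv"

        "github.com/manifoldco/promptui"
    )

    func main() {
        // Validate rejects anything that does not parse as a number.
        validate := func(input string) error {
            if _, err := strconv.ParseFloat(input, 64); err != nil {
                return fmt.Errorf("invalid number")
            }
            return nil
        }

        prompt := promptui.Prompt{
            Label:    "Number",
            Validate: validate,
        }

        result, err := prompt.Run()
        if err != nil {
            fmt.Printf("Prompt failed %v\n", err)
            return
        }

        fmt.Printf("You chose %q\n", result)
    }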
Multiprecision decimal numbers. For floating-point formatting only; not general purpose. Only operations are assign and (binary) left/right shift. Can do binary floating point in multiprecision decimal precisely because 2 divides 10; cannot do decimal floating point in multiprecision binary precisely. Package decimal implements an arbitrary precision fixed-point decimal. To use as part of a struct: The zero-value of a Decimal is 0, as you would expect. The best way to create a new Decimal is to use decimal.NewFromString, ex: NOTE: This can "only" represent numbers with a maximum of 2^31 digits after the decimal point.
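A minimal sketch of creating a Decimal from a string and embedding one in a struct, as suggested above:

    package main

    import (
        "fmt"

        "github.com/shopspring/decimal"
    )

    // Order shows a Decimal used as a struct field; its zero value is 0.
    type Order struct {
        Amount decimal.Decimal
    }

    func main() {
        amount, err := decimal.NewFromString("136.02")
        if err != nil {
            panic(err)
        }
        o := Order{Amount: amount}
        fmt.Println(o.Amount.StringFixed(2)) // 136.02
    }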
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values; log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json. This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
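A sketch of the two-handler setup described above: records from the rpc package go to standard out in logfmt, while Error-level records are written as JSON to a file.

    package main

    import log "github.com/inconshreveable/log15"

    func main() {
        handler := log.MultiHandler(
            // Only records carrying the context pair module=app/rpc reach stdout.
            log.MatchFilterHandler("module", "app/rpc", log.StdoutHandler),
            // Error level and above is written as JSON to a file; Must panics
            // instead of returning an error if the file cannot be opened.
            log.LvlFilterHandler(
                log.LvlError,
                log.Must.FileHandler("/var/log/service.json", log.JsonFormat()),
            ),
        )
        log.Root().SetHandler(handler)

        rpclog := log.New("module", "app/rpc")
        rpclog.Warn("abnormal conn rate", "rate", 0.500, "low", 0.100, "high", 0.800)
    }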
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, only when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: But now, even after a player has died, the logger will still report that they are alive, because the logging context was evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you:

- Any log record containing an error will still be printed with the error explained to you as part of the log record.
- Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values.

Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, for example if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
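A sketch of both Lazy uses described above, assuming a Player type along the lines of the example:

    package game

    import log "github.com/inconshreveable/log15"

    type Player struct {
        name  string
        alive bool
        log.Logger
    }

    func NewPlayer(name string) *Player {
        p := &Player{name: name, alive: true}
        // The Lazy wrapper defers evaluation to the moment each record is
        // written, so the logged value reflects the player's current state.
        p.Logger = log.New("name", name, "alive", log.Lazy{Fn: func() bool { return p.alive }})
        return p
    }

    // logFactors never calls factorRSAKey unless the Debug record passes the
    // installed handler's filter.
    func logFactors(logger log.Logger, factorRSAKey func() []int) {
        logger.Debug("factors", "result", log.Lazy{Fn: factorRSAKey})
    }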
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library:

code.google.com/p/log4go
github.com/op/go-logging
github.com/technoweenie/grohl
github.com/Sirupsen/logrus
github.com/kr/logfmt
github.com/spacemonkeygo/spacelog
golang's stdlib, notably io and net/http
https://xkcd.com/927/
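For illustration, here is a hedged sketch of the Tab setup with ext.RandId and of the Must object described above:

    package browser

    import (
        log "github.com/inconshreveable/log15"
        "github.com/inconshreveable/log15/ext"
    )

    type Tab struct {
        url string
        log.Logger
    }

    func NewTab(url string) *Tab {
        t := &Tab{url: url}
        t.Logger = log.New(
            // A random hex "surrogate key" that stays stable for the tab's lifetime.
            "id", ext.RandId(8),
            // Lazy keeps the current url visible in every record.
            "url", log.Lazy{Fn: func() string { return t.url }},
        )
        return t
    }

    // Must.FileHandler panics instead of returning an error if the file cannot
    // be opened, which is convenient during setup.
    var fileHandler = log.Must.FileHandler("/var/log/browser.log", log.LogfmtFormat())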
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin. When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser. In this scenario we have a number of infrastructure-specific events: This scenario also involves a number of events which have little to do with the infrastructure, but are still critical information for the business the system supports: There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for customers in particular. Product designers and managers need to know the effect of an A/B test on user behavior. But the fact that these teams will be consuming the data in different ways for different purposes does not mean that they are working on different systems. In order to instrument the various components of the system, we need a common data model. We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our photo example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID, and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our photo example, the two database queries from the app would share the same parent ID--the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc. Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causational analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real-time.
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causational analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems. "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" Etc., etc. By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events. To send a request with a root ID and a parent ID, use the Event-ID HTTP header: The header value is simply the root ID and event ID, hex-encoded and separated with a slash. If the event has a parent ID, that may be included as an optional third parameter. A server that receives a request with this header can use this to properly parent its own events. Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds. Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
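A rough sketch of setting the Event-ID header by hand, following the format described above; the helper shown is hypothetical, and the lunk package may provide its own utilities for this.

    package main

    import (
        "fmt"
        "net/http"
    )

    // setEventID is a hypothetical helper that writes the Event-ID header in the
    // format described above: rootID/eventID, with an optional /parentID.
    func setEventID(req *http.Request, rootID, eventID, parentID string) {
        v := rootID + "/" + eventID
        if parentID != "" {
            v += "/" + parentID
        }
        req.Header.Set("Event-ID", v)
    }

    func main() {
        req, _ := http.NewRequest("GET", "http://example.com/photos", nil)
        // The IDs shown are hex-encoded placeholders.
        setEventID(req, "4affc3b2f7b4ea1e", "91fdd4d2a5f8c4be", "")
        fmt.Println(req.Header.Get("Event-ID"))
    }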
Package glick provides a simple plug-in environment. The central feature of glick is the Library which contains example types for the input and output of each API on the system. Each of these APIs can have a number of "actions" upon them, for example a file conversion API may have one action for each of the file formats to be converted. Using the Run() method of glick.Library, a given API/Action combination runs the code in a function of Go type Plugin. Although it is easy to create your own plugins, there are three types built-in: Remote Procedure Calls (RPC), simple URL fetch (URL) and OS commands (CMD). A number of sub-packages simplify the use of third-party libraries when providing further types of plugin. The mapping of which plugin code to run occurs at three levels: 1) Initialisation and set-up code for the application will establish the glick.Library using glick.New(), then add API specifications using RegAPI(); it may also add the application's base plugins using RegPlugin(). 2) The base set-up can be extended and overloaded using a JSON format configuration description (probably held in a file) by calling the Config() method of glick.Library. This configuration process is extensible, using the AddConfigurator() method - see the glick/glpie or glick/glkit sub-packages for examples. 3) Which plugin to use can also be set up or overloaded at runtime within Run(). Each call to a plugin includes a Context (as described in https://blog.golang.org/context). This context can contain, for example, user details, which could be matched against a database to see if that user should be directed to one plugin for a given action, rather than another. It could also be used to wrap every plugin call by a particular user with some other code, for example to log or meter activity.
Package blackfriday is a markdown processor. It translates plain text with simple formatting rules into an AST, which can then be further processed to HTML (provided by Blackfriday itself) or other formats (provided by the community). The simplest way to invoke Blackfriday is to call the Run function. It will take a text input and produce a text output in HTML (or other format). A slightly more sophisticated way to use Blackfriday is to create a Markdown processor and to call Parse, which returns a syntax tree for the input document. You can leverage Blackfriday's parsing for content extraction from markdown documents. You can assign a custom renderer and set various options to the Markdown processor. If you're interested in calling Blackfriday from the command line, see https://github.com/russross/blackfriday-tool. Blackfriday includes an algorithm for creating sanitized anchor names corresponding to a given input text. This algorithm is used to create anchors for headings when the AutoHeadingIDs extension is enabled. The algorithm is specified below, so that other packages can create compatible anchor names and links to those anchors. The algorithm iterates over the input text, interpreted as UTF-8, one Unicode code point (rune) at a time. All runes that are letters (category L) or numbers (category N) are considered valid characters. They are mapped to lower case, and included in the output. All other runes are considered invalid characters. Invalid characters that precede the first valid character, as well as invalid characters that follow the last valid character, are dropped completely. All other sequences of invalid characters between two valid characters are replaced with a single dash character '-'. SanitizedAnchorName exposes this functionality, and can be used to create compatible links to the anchor names generated by blackfriday. This algorithm is also implemented in a small standalone package at github.com/shurcooL/sanitized_anchor_name. It can be useful for clients that want a small package and don't need the full functionality of blackfriday.
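A minimal sketch of the Run entry point described above (the v2 import path is assumed):

    package main

    import (
        "fmt"

        "github.com/russross/blackfriday/v2"
    )

    func main() {
        md := []byte("# Heading\n\nSome *markdown* text.\n")

        // Run parses the input with default settings and renders HTML.
        html := blackfriday.Run(md)
        fmt.Println(string(html))
    }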
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Copyright (c) 2012 The gocql Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file. This file will be the future home for more policies. The uuid package can be used to generate and parse universally unique identifiers, a standardized format in the form of a 128-bit number. http://tools.ietf.org/html/rfc4122
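A hedged sketch of a typical session plus a time-based UUID, based on the gocql API as commonly documented (hosts and schema are placeholders):

    package main

    import (
        "fmt"

        "github.com/gocql/gocql"
    )

    func main() {
        // Connect to a cluster; addresses are placeholders.
        cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2")
        cluster.Keyspace = "example"
        session, err := cluster.CreateSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Generate a time-based UUID as mentioned above.
        id := gocql.TimeUUID()
        if err := session.Query(
            `INSERT INTO tweet (id, text) VALUES (?, ?)`, id, "hello world",
        ).Exec(); err != nil {
            panic(err)
        }
        fmt.Println("inserted", id)
    }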
Package csv decodes and encodes comma-separated values (CSV) files to and from arbitrary Go types. Because there are many different kinds of CSV files, this package implements the format described in RFC 4180. A CSV file may contain an optional header and zero or more records of one or more fields per record. The number of fields must be the same for each record and the optional header. The field separator is configurable and defaults to comma ',' (0x2C). Empty lines and lines starting with a comment character are ignored. The comment character is configurable as well and defaults to the number sign '#' (0x23). Records are separated by the newline character '\n' (0x0A) and the final record may or may not be followed by a newline. Carriage returns '\r' (0x0D) before newline characters are silently removed. White space is considered part of a field. Leading or trailing whitespace can optionally be trimmed when parsing a value. Fields may optionally be quoted in which case the surrounding double quotes '"' (0x22) are removed before processing. Inside a quoted field a double quote may be escaped by a preceding second double quote which will be removed during parsing.
Package kvlog implements a key=value log formatter for the logrus logging package. It can optionally include the calling function name and line number, include constant values in every line and promote certain primary keys to the beginning of each line. All other keys are sorted into alphabetical order for easy scanning, with the human-readable description at the end of each line, e.g.: This provides a format that's human-readable, yet can be automatically extracted by tools such as Splunk.
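A hedged sketch of installing a formatter like this with logrus; the kvlog import path and the Formatter type name are assumptions for illustration, and only the logrus calls are standard:

    package main

    import (
        kvlog "example.com/kvlog" // import path is an assumption
        log "github.com/sirupsen/logrus"
    )

    func main() {
        // Formatter is a hypothetical name for the key=value formatter type.
        log.SetFormatter(&kvlog.Formatter{})

        log.WithFields(log.Fields{
            "user": "anna",
            "op":   "login",
        }).Info("user logged in")
    }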
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire:

Request Objects

    {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]}

NOTE: Notifications are the same format except the id field is null.

Response Objects

    {"result":SOMETHING,"error":null,"id":"SOMEID"}
    {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"}

For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two-step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two-step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorKind field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
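As a rough illustration of the two-step response handling described above, using only encoding/json and the wire format shown earlier (the envelope type here is an illustrative stand-in, not this package's own Response type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // rpcResponse mirrors the wire format shown above; it is an illustrative
    // stand-in, not the package's own Response type.
    type rpcResponse struct {
        Result json.RawMessage `json:"result"`
        Error  *struct {
            Code    int    `json:"code"`
            Message string `json:"message"`
        } `json:"error"`
        ID interface{} `json:"id"`
    }

    func main() {
        raw := []byte(`{"result":358216,"error":null,"id":1}`)

        // Step 1: unmarshal the envelope to get access to the ID and Error fields.
        var resp rpcResponse
        if err := json.Unmarshal(raw, &resp); err != nil {
            panic(err)
        }

        // Step 2: unmarshal the result field into the concrete type expected for
        // the method, here a block count.
        var blockCount int64
        if err := json.Unmarshal(resp.Result, &blockCount); err != nil {
            panic(err)
        }
        fmt.Println("block count:", blockCount)
    }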
Package glocc implements a relatively fast, parallel counter of lines of code in files and directories. It also includes a command line tool, glocc, which is handy for performing such counting and pretty printing (brief or extensive) of the results. glocc is an aggressively parallel solution to an embarrassingly parallel problem. The count for every file and every subdirectory is assigned to a separate goroutine. All spawned goroutines are properly synchronized and their independent results are merged later, on a higher level (level = on a per-subdirectory basis). It was originally written for use with personal projects and small codebases, and also to get in touch with the Go programming language. Performance-wise, it can be further improved (and hopefully will be, when I have more time). Simply run it with any number of files or directories as command line arguments: By default, only a summary of all counted lines is printed to the standard output. To print the results extensively in a tree-like format, it can be executed with the -a flag: The results can be printed in YAML (default) or JSON format, using the -o flag: Running it with the -h flag shows all options available. For use as a package, glocc exports `func CountLoc(root string) DirResult`, which, given a root directory, returns a struct of type DirResult, a custom (recursive) type that contains the results of counting all lines of code under this root directory. It also exports EnableLogging() and DisableLogging() functions, to enable and disable verbose logging to standard error, respectively, using a package-level logger. Note that verbose logging includes details about every line of every file visited, which might be quite ...verbose, and not that useful. For now, really huge source trees, like the Linux kernel source tree, might rarely cause glocc to crash, due to the large number of blocked OS threads trying to handle the huge number of goroutines spawned. To be more precise, the exact problem is reported as: It cannot occur in small and medium-sized codebases, and it's also unlikely to occur in bigger ones. Just be warned. I plan to hack around this problem once I have the time; maybe using some kind of pool or something, or by spawning the goroutines in some clever way. As long as this note is here though, the bug is probably still around. Theoretically, a quick and dirty solution would be to increase the number of operating system threads that a Go program can use, using the SetMaxThreads() function in runtime/debug; the default value is set to 10000 threads. However, mind that (quoted from https://golang.org/pkg/runtime/debug/#SetMaxThreads): Supported languages: Ada, assembly, AWK, C, C++, C#, D (not the ddoc comments), Delphi, Dockerfile, Eiffel, Elixir, Erlang, Go, Haskell, HTML, Java, Javascript, JSON, Kotlin, Lisp, Makefile, Matlab, OCaml, Perl (not __END__ comments), PHP, PowerShell, Python, R, Ruby (not __END__ comments), Rust, Scala, Scheme, shell scripts, SQL, Standard ML, TeX, Tcl, YAML.
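A minimal sketch using the exported CountLoc function named above (the import path is an assumption):

    package main

    import (
        "fmt"

        "github.com/ckatsak/glocc" // import path is an assumption
    )

    func main() {
        // Count all lines of code under the given root directory; the returned
        // DirResult aggregates per-subdirectory results.
        result := glocc.CountLoc("./myproject")
        fmt.Println(result)
    }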
Package gofpdf implements a PDF document generator with high level support for text, drawing and images.

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test.
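The simple PDF example referenced above looks roughly like this (mirroring the well-known gofpdf hello-world flow):

    package main

    import "github.com/jung-kurt/gofpdf"

    func main() {
        pdf := gofpdf.New("P", "mm", "A4", "")
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")

        // Errors are accumulated internally; OutputFileAndClose reports any
        // error that occurred during document construction.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            panic(err)
        }
    }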
In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should:

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it.
Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoek and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations.

Planned changes:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library.
If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
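A short sketch of that deferred error handling, using the Ok, Error and SetErrorf methods named earlier in this documentation:

    package report

    import "github.com/jung-kurt/gofpdf"

    // generate defers all error checking to the end, as described above.
    func generate(pdf *gofpdf.Fpdf) error {
        pdf.AddPage()
        pdf.SetFont("Times", "", 12)
        pdf.Cell(0, 10, "report body")

        // The application can also record its own failure on the instance, e.g.
        // pdf.SetErrorf("data source unavailable: %v", err); subsequent method
        // calls then become no-ops.
        if !pdf.Ok() {
            return pdf.Error()
        }
        return pdf.OutputFileAndClose("report.pdf")
    }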
Package log implements a std log compatible logging system that draws some inspiration from the Python logging module from Python's standard library. It supports multiple handlers, log levels, zero-allocation logging, scopes, custom formatting, and environment and runtime configuration. When not used to replace std log, the import should use the package name "analog", as in: Each Logger has a sequence of names that are used for filtering and context. Names are commonly attached as Loggers are passed into code of deeper context. The full import path of the package where a message is generated, and the short source file name and line number, are added as the last two names for each message (applying any Msg.Skip in finding the right frame) when filtering is applied. The names are included at the end of each logging line. A sequence of rules is parsed from the environment variable with the key of EnvRules. Rules are separated by ",". Each rule is a substring of a log message name, or "*" to match any name. If there is no "=" in the rule, then all messages that match will be logged. If there is a "=", then a message must have the level following the "=", as parsed by Level.UnmarshalText, or higher to be logged. Each rule is checked in order, and the last match takes precedence. This helps when you want to chain new rules onto existing ones: you can always append to the end to override earlier rules. Some examples:

    GO_LOG=*                   Log everything, no matter the level
    GO_LOG=*=,*,hello=debug    Log everything, except for any message containing a name with the substring "hello", which must be at least debug level
    GO_LOG=something=info      Handle messages at the info level or greater if they have a name containing "something"

If no rule matches, the Logger's filter level is checked. The Default filter level is Warning. This means only messages with the level of Warning or higher will be logged, unless overridden by the specific Logger in use, or a rule from the environment matches. If the environment variable with the key EnvReportRules is not the empty string, each message logged with a previously unseen permutation of names will output a message to a standard library logger with the minimum level required to log that permutation. The message itself is then handled as usual. The same permutation will not be reported on again. This is useful to determine what logging names are in use, and to debug their reporting level thresholds.
Package mimetype uses magic number signatures to detect the MIME type of a file. File formats are stored in a hierarchy with application/octet-stream at its root. For example, the hierarchy for HTML format is application/octet-stream -> text/plain -> text/html. Pure io.Readers (meaning those without a Seek method) cannot be read twice. This means that once DetectReader has been called on an io.Reader, that reader is missing the bytes representing the header of the file. To detect the MIME type and then reuse the input, use a buffer, io.TeeReader, and io.MultiReader to create a new reader containing the original, unaltered data. If the input is an io.ReadSeeker instead, call input.Seek(0, io.SeekStart) before reusing it. Use Extend to add support for a file format which is not detected by mimetype. https://www.garykessler.net/library/file_sigs.html and https://github.com/file/file/tree/master/magic/Magdir have signatures for a multitude of file formats. Considering the definition of a binary file as "a computer file that is not a text file", binary and text files can be differentiated by checking for the text/plain MIME type in their MIME hierarchy.
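A sketch of the buffer/TeeReader/MultiReader recipe described above for reusing a plain io.Reader after detection:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "strings"

        "github.com/gabriel-vasile/mimetype"
    )

    func main() {
        var input io.Reader = strings.NewReader("<html><body>hello</body></html>")

        // DetectReader consumes the header bytes, so tee them into a buffer...
        header := new(bytes.Buffer)
        mtype, err := mimetype.DetectReader(io.TeeReader(input, header))
        if err != nil {
            panic(err)
        }
        fmt.Println(mtype.String())

        // ...and stitch the header back in front of the rest of the input.
        recreated := io.MultiReader(header, input)
        rest, _ := io.ReadAll(recreated)
        fmt.Println(len(rest), "bytes available for reuse")
    }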
Package couch implements a client for a CouchDB database. Version 0.1 focuses on basic operations, proper conflict management, error handling and replication. Not part of this version are attachment handling, general statistics and optimizations, change detection and creating views. Most of the features are accessible using the generic Do() function, though. Getting started: Every document in CouchDB has to be identifiable by a document id and a revision id. Two types already implement this interface, called Identifiable: Doc and DynamicDoc. Doc can be used as an anonymous field in your own struct. DynamicDoc is a type alias for map[string]interface{}; use it when your documents have no implicit schema at all. To make code examples easier to follow, there will be no explicit error handling in these examples even though it's fully supported throughout the API. Insert() will create a new document if it doesn't have an id yet: After the operation the final id and revision id will be written back to p. That's why you can now just edit p and call Insert() again, which will save the same document under a new revision. After this edit, p will contain the latest revision id. Note that it is possible that this second edit fails because someone else edited and saved the same document in the meantime. You will be notified of this in the form of an error and you should then first retrieve the latest document revision to see the changes of this lost update: CouchDB doesn't edit documents in-place but adds a complete revision for each edit. That's why you will be correctly informed of any lost update. Because CouchDB supports multi-master replication of databases, it is possible that conflicts like the one described above can't be avoided. CouchDB is not going to interrupt replication because of a lost update. Let's say you have two instances running, maybe a central one and a mobile one, and both are kept in sync by replication. Now let's assume you edit a document on your mobile DB and someone else edits the same document on the central DB. After you've come online again, you use bi-directional replication to sync the databases. CouchDB will now create a branch structure for your document, similar to version control systems. Your document has two conflicting revisions and in this case they can't necessarily be resolved automatically. This client helps you with a number of methods to resolve such an issue quickly. Read more about the conflict model at http://docs.couchdb.org/en/latest/replication/conflicts.html Continuing with the above example, replicate the database: Now, on the other database, edit the document (note that it has the same id there): Now edit the document on the first database. Retrieve it first to make sure it has the correct revision id: Now replicate anotherDB back to our first database: Now we have two conflicting versions of a document. Only you as the editor can decide whether "LatestAnna" or "AnotherAnna" is correct. To detect this conflict there are a number of methods. First, you can just ask a document: You probably want to have a look at the revisions in your preferred format, use Revisions() to unmarshal the revision data into a slice of a custom data type: Pick one of the revisions or create a new document to solve the conflict: That's it. You can detect conflicts like these throughout your database using: Errors returned by CouchDB will be converted into a Go error. Its regular Error() method will then return a combination of the shortform (e.g.
bad_request) as well as the longer and more specific description. To be able to identify a specific error within your application, use ErrorType() to get the shortform only.
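A rough sketch of the two Identifiable options mentioned above; only the embedding itself is shown, and the import path is an assumption:

    package example

    import "github.com/patrickjuchli/couch" // import path is an assumption

    // Person embeds couch.Doc as an anonymous field so the document id and
    // revision id are tracked across Insert and edit operations.
    type Person struct {
        couch.Doc
        Name string
    }

    // newProfile uses DynamicDoc, the map[string]interface{} alias, for
    // documents with no fixed schema.
    func newProfile(name string) couch.DynamicDoc {
        return couch.DynamicDoc{"type": "profile", "name": name}
    }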
Package wav decodes and encodes wav audio files. The decoder is able to decode all wav audio formats (except extensible WAV formats), with any number of channels. These formats are: The encoder is capable of encoding any audio data -- but it currently will convert all data to 16-bit signed PCM on-the-fly before writing to a file. Ultimately this means that regardless of what type of audio data you encode, it ends up as a 16-bit WAV file. Future versions of this package will allow the encoder to output the same types as the decoder. Please refer to the WAV specification for in-depth details about its file format: