bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice.

The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call.

When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from.

An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of JavaScript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets.

The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue.

The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples:

This would be the default mode, using an extra memcopy but giving a safe implementation without dependencies on `reflect` and `unsafe`:

Here is the same functionality, but using the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error.

The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression.

The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag, we get:

Running with the `-prefix` flag, we get:

With the optional Tags field, you can specify any Go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
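As a rough sketch of driving the generator from Go, the program below wires up the options discussed above. The import path and the exact Config/InputConfig field set are assumptions taken from common go-bindata versions, so verify them against the release you use:

    package main

    import (
        "log"

        "github.com/jteeuwen/go-bindata" // import path assumed
    )

    func main() {
        cfg := bindata.NewConfig()
        cfg.Package = "assets"           // package name of the generated file
        cfg.Output = "assets/bindata.go" // where the generated code is written
        cfg.Prefix = "static/"           // strip this path prefix from map keys
        cfg.Tags = "embed"               // appended to a // +build line
        cfg.NoMemCopy = false            // default safe mode, one extra memcopy
        cfg.Input = []bindata.InputConfig{{Path: "static", Recursive: true}}
        if err := bindata.Translate(cfg); err != nil {
            log.Fatal(err)
        }
    }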
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages.

This package enforces logging only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log:

This will output a line that looks like:

To get started, you'll want to import the library:

Now you're ready to start logging:

Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values; log15 understands that they alternate in the variadic argument list:

If you really do favor your type-safety, you may choose to pass a log.Ctx instead:

Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line:

This will output a log line that includes the path context that is attached to the logger:

The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface:

Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that can be easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout:

Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json:

The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.

Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example:

If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated.
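To make that concrete, a minimal sketch of the Lazy wrapper is shown below; factorRSAKey stands in for the hypothetical expensive computation mentioned above, and the import path is assumed:

    package main

    import (
        log "github.com/inconshreveable/log15" // import path assumed
    )

    // factorRSAKey stands in for an expensive computation that should
    // only run if the record is actually logged.
    func factorRSAKey() string {
        // ... imagine something very slow here ...
        return "p=..., q=..."
    }

    func main() {
        // Lazy defers calling factorRSAKey until a Handler actually
        // formats the record. If the Debug record is filtered out,
        // factorRSAKey is never evaluated.
        log.Debug("factoring", "factors", log.Lazy{Fn: factorRSAKey})
    }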
The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects:

You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do:

But now, even after a player has died, the logger will still report they are alive, because the logging context was evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written:

If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level.

Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you:

- Any log record containing an error will still be printed with the error explained to you as part of the log record.

- Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values.

Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, like if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures, like the FailoverHandler.

log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so:

Users of your library may then enable it if they like:

The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser:

When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs. Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically:

There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so.
But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so:

Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages.

For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example:

All of the following excellent projects inspired the design of this library:

    code.google.com/p/log4go
    github.com/op/go-logging
    github.com/technoweenie/grohl
    github.com/Sirupsen/logrus
    github.com/kr/logfmt
    github.com/spacemonkeygo/spacelog
    golang's stdlib, notably io and net/http
    https://xkcd.com/927/
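A sketch of the surrogate-key pattern and the Must object described above; the Tab fields and both import paths are assumptions for illustration:

    package main

    import (
        log "github.com/inconshreveable/log15"  // import path assumed
        "github.com/inconshreveable/log15/ext"  // ext package, path assumed
    )

    type Tab struct {
        url    string
        Logger log.Logger
    }

    func NewTab(url string) *Tab {
        t := &Tab{url: url}
        // A random hex id gives the tab a stable identity across url
        // changes; the Lazy wrapper keeps the *current* url visible in
        // every record.
        t.Logger = log.New(
            "id", ext.RandId(8),
            "url", log.Lazy{Fn: func() string { return t.url }},
        )
        return t
    }

    func main() {
        t := NewTab("https://golang.org")
        t.Logger.Info("moved")

        // Must.FileHandler panics instead of returning an error.
        h := log.Must.FileHandler("/var/log/app.log", log.LogfmtFormat())
        log.Root().SetHandler(h)
    }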
Package sqlite implements a database/sql driver for SQLite3.

This driver requires that a file: URI always be used to open a database. For details see https://sqlite.org/c3ref/open.html#urifilenames.

If you want to do initial configuration of a connection, or enable tracing, use the Connector function:

In-memory databases are popular for tests. Use the "memdb" VFS (*not* the legacy in-memory modes) to be compatible with the database/sql connection pool:

Use a different dbname for each memory database opened.

SQLite is flexible about type conversions, and so is this driver. Almost all "basic" Go types (int, float64, string) are accepted and directly mapped into SQLite, even if they are named Go types. The time.Time type is also accepted (described below). Values that implement encoding.TextMarshaler or json.Marshaler are stored in SQLite in their marshaled form.

While SQLite3 has no strict time datatype, it does have a series of built-in functions that operate on timestamps and expect columns to be in one of many formats: https://sqlite.org/lang_datefunc.html

When encoding a time.Time into one of SQLite's preferred formats, we use the shortest timestamp format that can accurately represent the time.Time. The supported formats are:

If the time.Time is not UTC (strongly consider storing times in UTC!), we follow SQLite's norm of appending "[+-]HH:MM" to the above formats.

It is common in SQLite to store "Unix time", seconds-since-epoch, in an INTEGER column. This is understood by the date and time functions documented in the link above. If you want to do that, pass the result of time.Time.Unix to the driver.

In general, time is hard to extract from SQLite as a time.Time. If a column is defined as DATE or DATETIME, then text data is parsed as TimeFormat and returned as a time.Time. Integer data is parsed as seconds since epoch and returned as a time.Time.
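A minimal sketch of opening databases the way the text describes. The registered driver name, the import path, and the exact memdb URI shape are all assumptions to check against your driver version:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/tailscale/sqlite" // driver import path assumed
    )

    func main() {
        // On-disk database: a file: URI is required.
        db, err := sql.Open("sqlite", "file:app.db") // driver name assumed
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // In-memory database compatible with the connection pool: use
        // the "memdb" VFS and a distinct dbname per database opened.
        mem, err := sql.Open("sqlite", "file:/testdb1?vfs=memdb") // URI shape assumed
        if err != nil {
            log.Fatal(err)
        }
        defer mem.Close()
    }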
Package pdf implements reading of PDF files.

PDF is Adobe's Portable Document Format, ubiquitous on the internet. A PDF document is a complex data format built on a fairly simple structure. This package exposes the simple structure along with some wrappers to extract basic information. If more complex information is needed, it is possible to extract that information by interpreting the structure exposed by this package.

Specifically, a PDF is a data structure built from Values, each of which has one of the following Kinds:

The accessors on Value—Int64, Float64, Bool, Name, and so on—return a view of the data as the given type. When there is no appropriate view, the accessor returns a zero result. For example, the Name accessor returns the empty string if called on a Value v for which v.Kind() != Name. Returning zero values this way, especially from the Dict and Array accessors, which themselves return Values, makes it possible to traverse a PDF quickly without writing any error checking. On the other hand, it means that mistakes can go unreported.

The basic structure of the PDF file is exposed as the graph of Values. Most richer data structures in a PDF file are dictionaries with specific interpretations of the name-value pairs. The Font and Page wrappers make the interpretation of a specific Value as the corresponding type easier. They are only helpers, though: they are implemented only in terms of the Value API and could be moved outside the package. Equally important, traversal of other PDF data structures can be implemented in other packages as needed.
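A short sketch of the error-check-free traversal style described above; the import path and the exact method set (Open, NumPage, Page, Key) are assumptions to verify against the package's documentation:

    package main

    import (
        "fmt"
        "log"

        "rsc.io/pdf" // import path assumed
    )

    func main() {
        r, err := pdf.Open("doc.pdf")
        if err != nil {
            log.Fatal(err)
        }
        // Walk each page dictionary. Accessors return zero values when
        // a view does not apply, so no per-step error checking is needed.
        for i := 1; i <= r.NumPage(); i++ {
            p := r.Page(i)
            // Key returns the null Value if the entry is absent.
            box := p.V.Key("MediaBox")
            fmt.Println("page", i, "media box:", box)
        }
    }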
Package klog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags.

Basic examples:

See the documentation for the V function for an explanation of these examples:

Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written.

By default, all log statements write to standard error. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
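A compact sketch of the flag setup and V-style logging the text describes; the flag wiring shown is one common arrangement, not the only one:

    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        // Register klog's flags (-v, -vmodule, ...) and parse them
        // before any logging is done.
        klog.InitFlags(nil)
        flag.Parse()
        // Output is buffered; guarantee it is written before exit.
        defer klog.Flush()

        klog.Info("starting up")
        klog.Warningf("cache size %d below recommended minimum", 10)
        // Emitted only when the verbosity level is 2 or higher.
        if klog.V(2).Enabled() {
            klog.V(2).Info("verbose detail")
        }
    }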
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for the latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before, plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful while having a UNIQUE clause to check for semantically duplicate rows before they get added to the table, or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.

2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default.

2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.

2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)

2015-02-16: The IN predicate now accepts a SELECT statement. See the updated "Predicates" section.

2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea not to use any name appearing in, for example, [7] in your queries, as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".

2015-01-12: ACID guarantees were tightened at the cost of performance in some cases.
The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19; the discussed synchronization principles involved are the same as for QL, modulo minor details.

Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Not that it was the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.

2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.

2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)

2014-06-05: The RecordSet interface was extended by new methods FirstRow and Rows.

2014-06-02: Indices on id() are now used by SELECT statements.

2014-05-07: Introduction of Marshal, Schema, Unmarshal.

2014-04-15: Added an optional IF NOT EXISTS clause to CREATE INDEX and an optional IF EXISTS clause to DROP INDEX.

2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.

2014-04-11: Introduction of LIMIT, OFFSET.

2014-04-10: Introduction of query rewriting.

2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example:

If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option):

The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``.

The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified.

QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters.

Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement.

Implementation restriction: A byte order mark is disallowed anywhere in QL statements.

The following terms are used to denote specific character classes

The underscore character _ (U+005F) is considered a letter.
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals, and QL parameters.

Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.

Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token.

The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.)

Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example

No identifiers are predeclared; however, note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example

The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.

The following character sequences represent operators, delimiters, and other special tokens

Operators consisting of more than one character are referred to by names in the rest of the documentation

An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example

A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example

An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example

A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats.

The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.
Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base.

Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points, so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves.

After a backslash, certain single-character escapes represent special values

All other sequences starting with a backslash are illegal inside rune literals. For example

A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals.

Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.

Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example

These examples all represent the same string

If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal.

Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example

Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive).

Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type.

A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.
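Before turning to the individual types, here is an illustrative statement combining the literal forms and a QL parameter described above. The table t and its columns are made up, and the $1 spelling of the parameter should be checked against the grammar:

    -- Illustrative sketch; assumes a table t (name string, age int).
    -- $1 (equivalently ?1) is bound at execution time from the
    -- expression list passed to DB.Run or DB.Execute; 0x2A and
    -- "unknown" are an integer and an interpreted string literal.
    SELECT name, age
    FROM t
    WHERE age >= $1 && age != 0x2A && name != "unknown";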
Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G.

A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool.

A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.

A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are

The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment.

A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len.

A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time.

The following functions are implicitly declared

An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example

Primary expressions are the operands for unary and binary expressions. For example

A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply:

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.

- A constant index must be non-negative and representable by a value of type int.

- A constant index must be in range if the string s is a literal.

- If x is out of range at run time, a run-time error occurs.

- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand.

The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.

Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error.

Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions.

Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone.

Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL.

Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to

The types of involved expressions must be comparable as defined in "Comparison operators".

Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt, then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types:

Expressions of the form are equivalent, including NULL handling, to

The types of involved expressions must be ordered as defined in "Comparison operators".

Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false.

Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR):

Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar.

Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.
    +    sum                   integers, rationals, floats, complex values, strings
    -    difference            integers, rationals, floats, complex values, times
    *    product               integers, rationals, floats, complex values
    /    quotient              integers, rationals, floats, complex values
    %    remainder             integers

    &    bitwise AND           integers
    |    bitwise OR            integers
    ^    bitwise XOR           integers
    &^   bit clear (AND NOT)   integers

    <<   left shift            integer << unsigned integer
    >>   right shift           integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands.

A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration.

For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships, with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation.

The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.

For integer operands, the unary operators +, -, and ^ are defined as follows

For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific.

Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL.

For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on "wrap around".

For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow, but their handling is limited by the memory resources available to the program.

Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows:

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.

- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).

- Integer values are comparable and ordered, in the usual way. Note that durations are integers.

- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.

- Rational values are comparable and ordered, in the usual way.

- String values are comparable and ordered, lexically byte-wise.

- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL.

Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T.

A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.

- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.

- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.

- x's type and T are both integer or floating point types.

- x's type and T are both complex types.

- x is an integer, except bigint or duration, and T is a string type.

Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value, the conversion succeeds but the result value is implementation-dependent.

For conversions to and from a string type, the following rules apply:

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer.
Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of "0x" or "0X" selects base 16; the "0" prefix selects base 8, and a "0b" or "0B" prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c.

Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause they add a new column to the table. The column must not exist. With the DROP clause they remove an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example

When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must eventually be balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
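For instance, a correctly balanced transaction might look like the following sketch (the table and its columns are made up for illustration):

    BEGIN TRANSACTION;
        CREATE TABLE t (c1 string, c2 int);
        INSERT INTO t VALUES ("a", 1);
        INSERT INTO t VALUES ("b", 2);
    COMMIT;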
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example

A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction

A database is effectively read only outside of a transaction. Statements not requiring a transaction

The commit statement closes the innermost transaction nesting level. If that's the outermost level, then the updates to the DB made by the transaction are atomically made persistent. For example

Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example

Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present.

The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL.

The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists.

A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types.

Note: Blob-like types are blob, bigint, bigrat, time and duration.

Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example

The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists.

The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true, then the validation succeeded. If the value of the expression is false or NULL, then the validation fails. If the value of the expression is not of type bool, an error occurs.

The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name:

When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.
4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example

If the WHERE clause is not present, then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are a result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which allows inserting values calculated from the very table rows are being inserted into.

If the ColumnNameList part is omitted, then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present, then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause, then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines:

The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement.

If the statement has nothing special in its plan, the result is the original statement.

To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level, then the effects on the DB are as if the transaction never happened. For example

The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList.

RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
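For example, an illustrative query exercising these clauses might look like the following sketch (the employee and department tables and their columns are assumptions):

    SELECT e.LastName, d.DepartmentName AS Dept
    FROM employee AS e, department AS d
    WHERE e.DepartmentID == d.DepartmentID
    ORDER BY e.LastName;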
If Recordset is a nested, parenthesized SelectStmt, then it must be given a name using the AS clause if its fields are to be accessible in expressions.

A field is a named expression. Identifiers, not used as a type in a conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list.

When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause, for example

is mostly equal to

except that the rows from a which, when they appear in the cross join, never caused expr to evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value).

Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example

Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets.

The optional OFFSET clause allows ignoring the first N records. For example

The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example

The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example

Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.

The select statement is evaluated as follows:
The clauses of a select statement are evaluated in the following order:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).
2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)
3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.
4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).
5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).
6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).
7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).
8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.
9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example
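A hedged sketch of querying the system tables described above; the table name "employee" is invented:

    SELECT * FROM __Table;
    SELECT Name, Ordinal FROM __Column WHERE TableName == "employee" ORDER BY Ordinal;
    SELECT * FROM __Index;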
Date returns the time corresponding to the given year, month, day, hour, min, sec and nsec in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL. The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, Mon Jan 2 15:04:05 -0700 MST 2006, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). The same call thus returns one representation on a machine in the CET time zone, but may return another on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.
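A hedged sketch using some of the string, number and time built-ins described above; the table t and the field Name are invented:

    SELECT formatTime(timeIn(now(), "UTC"), "2006-01-02 15:04:05") AS stamp,
        formatFloat(3.25) AS f,
        formatInt(42) AS i,
        hasPrefix(Name, "Jo") AS jo
    FROM t;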
The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL. The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, Mon Jan 2 15:04:05 -0700 MST 2006, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
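Hedged sketches of the aggregate and parsing built-ins described above; the employee table and its fields are invented:

    SELECT DepartmentID, min(Salary), max(Salary)
    FROM employee
    GROUP BY DepartmentID;

    SELECT parseTime("2006-01-02 15:04:05 -0700", "2024-05-01 12:30:00 +0200") AS t
    FROM __Table
    LIMIT 1;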
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL. The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real and imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed. Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation: This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only few rows from both tables are selected by the respective WHERE filtering.
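A hedged sketch of creating indices and a query whose WHERE clause matches one of the recognized patterns; the names are invented:

    BEGIN TRANSACTION;
        CREATE TABLE t (f int);
        CREATE INDEX xID ON t (id());
        CREATE INDEX xF ON t (f);
    COMMIT;

    SELECT * FROM t WHERE f == $1;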
Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
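For instance, a full benchmark-only run with a generous timeout might look like this; the flag values are illustrative:

    go test -v -run '^$' -bench . -timeout 24h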
Package klog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: See the documentation for the V function for an explanation of these examples: Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to standard error. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
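A minimal sketch of typical usage, assuming the k8s.io/klog/v2 import path; the messages and verbosity level are illustrative:

    package main

    import (
        "flag"

        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil) // register -v, -vmodule and friends with the flag package
        flag.Parse()        // must run before any logging is done
        defer klog.Flush()  // output is buffered; flush before exiting

        klog.Info("starting up")
        klog.Warningf("using %d workers", 4)
        if klog.V(2).Enabled() {
            klog.Info("verbose detail, shown with -v=2 or higher")
        }
    }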
Package capnp is a Cap'n Proto library for Go. https://capnproto.org/ Read the Getting Started guide for a tutorial on how to use this package. https://github.com/capnproto/go-capnproto2/wiki/Getting-Started capnpc-go provides the compiler backend for capnp. capnpc-go requires two annotations for all files: package and import. package is needed to know what package to place at the head of the generated file and what identifier to use when referring to the type from another package. import should be the fully qualified import path and is used to generate import statements from other packages and to detect when two types are in the same package. For example: For adding documentation comments to the generated code, there's the doc annotation. This annotation adds the comment to a struct, enum or field so that godoc will pick it up. For example: In Cap'n Proto, the unit of communication is a message. A message consists of one or more segments -- contiguous blocks of memory. This allows large messages to be split up and loaded independently or lazily. Typically you will use one segment per message. Logically, a message is organized in a tree of objects, with the root always being a struct (as opposed to a list or primitive). Messages can be read from and written to a stream. The Message and Segment types are the main types that application code will use from this package. The Message type has methods for marshaling and unmarshaling its segments to the wire format. If the application needs to read or write from a stream, it should use the Encoder and Decoder types. The type for a generic reference to a Cap'n Proto object is Ptr. A Ptr can refer to a struct, a list, or an interface. Ptr, Struct, List, and Interface (the pointer types) have value semantics and refer to data in a single segment. All of the pointer types have a notion of "valid". An invalid pointer will return the default value from any accessor and panic when any setter is called. In previous versions of this package, the Pointer interface was used instead of the Ptr struct. This interface and functions that use it are now deprecated. See https://github.com/capnproto/go-capnproto2/wiki/New-Ptr-Type for details about this API change. Data accessors and setters (i.e. struct primitive fields and list elements) do not return errors, but pointer accessors and setters do. There are a few reasons that a read or write of a pointer can fail, but the most common are bad pointers or allocation failures. For accessors, an invalid object will be returned in case of an error. Since Go doesn't have generics, wrapper types provide type safety on lists. This package provides lists of basic types, and capnpc-go generates list wrappers for named types. However, if you need to use deeper nesting of lists (e.g. List(List(UInt8))), you will need to use a PointerList and wrap the elements. For the following schema: capnpc-go will generate: For each group a typedef is created with a different method set for just the group's fields: generates the following: That way the following may be used to access a field in a group: Note that group accessors just convert the type and so have no overhead. Named unions are treated as a group with an inner unnamed union. Unnamed unions generate an enum Type_Which and a corresponding Which() function: generates the following: Which() should be checked before using the getters, and the default case must always be handled. Setters for single values will set the union discriminator as well as set the value.
For voids in unions, there is a void setter that just sets the discriminator. For example: generates the following: Similarly, for groups in unions, there is a group setter that just sets the discriminator. This must be called before the group getter can be used to set values. For example: and in usage: capnpc-go generates enum values as constants. For example in the capnp file: In the generated capnp.go file: In addition an enum.String() function is generated that will convert the constants to a string for debugging or logging purposes. By default, the enum name is used as the tag value, but the tags can be customized with a $Go.tag or $Go.notag annotation. For example: In the generated go file: capnpc-go generates type-safe Client wrappers for interfaces. For parameter lists and result lists, structs are generated as described above with the names Interface_method_Params and Interface_method_Results, unless a single struct type is used. For example, for this interface: capnpc-go generates the following Go code (along with the structs Calculator_evaluate_Params and Calculator_evaluate_Results): capnpc-go also generates code to implement the interface: Since a single capability may want to implement many interfaces, you can use multiple *_Methods functions to build a single slice to send to NewServer. An example of combining the client/server code to communicate with a locally implemented Calculator: A note about message ordering: when implementing a server method, you are responsible for acknowledging delivery of a method call. Failure to do so can cause deadlocks. See the server.Ack function for more details.
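A minimal sketch of building, encoding and decoding a message, assuming the zombiezen.com/go/capnproto2 import path; the schema-specific NewRootX calls are omitted because they depend on your generated code:

    package main

    import (
        "bytes"
        "fmt"

        capnp "zombiezen.com/go/capnproto2"
    )

    func main() {
        // Build a message backed by a single memory segment.
        msg, seg, err := capnp.NewMessage(capnp.SingleSegment(nil))
        if err != nil {
            panic(err)
        }
        _ = seg // a generated NewRootFoo(seg) call would normally go here

        // Marshal to the wire format and read it back.
        var buf bytes.Buffer
        if err := capnp.NewEncoder(&buf).Encode(msg); err != nil {
            panic(err)
        }
        back, err := capnp.NewDecoder(&buf).Decode()
        if err != nil {
            panic(err)
        }
        fmt.Println(back.NumSegments())
    }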
Golex is a lex/flex like (not fully POSIX lex compatible) utility. It renders .l formatted data (http://flex.sourceforge.net/manual/Format.html#Format) to Go source code. The .l data can come from a file named in a command line argument. If no non-option arguments are given, golex reads stdin. Options: To get the latest golex version: Please see http://godoc.org/modernc.org/golex/lex. 2014-11-18: Golex now supports %yym - a hook which can be used to mark an accepting state. Consider for example this .l file: Execution and output: 2014-11-15: Golex's output is now gofmt'ed, if possible. Missing/differing functionality of the current renderer (compared to flex): Further limitations on the .l source are listed in the cznic/lex package godocs. A simple golex program example (make example1 && ./example1):
Package pgzip implements reading and writing of gzip format compressed files, as specified in RFC 1952. This is a drop-in replacement for "compress/gzip". It splits compression into blocks that are compressed in parallel, which can be useful for compressing large amounts of data. The gzip decompression has not been modified, but remains in the package, so you can use it as a complete replacement for "compress/gzip". See more at https://github.com/klauspost/pgzip
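A minimal sketch of parallel compression with the package, mirroring how compress/gzip would be used; the block size and worker count are illustrative:

    package main

    import (
        "io"
        "os"

        "github.com/klauspost/pgzip"
    )

    func main() {
        // Compress stdin to stdout; the input is split into blocks that
        // are compressed in parallel.
        w := pgzip.NewWriter(os.Stdout)
        if err := w.SetConcurrency(1<<20, 8); err != nil { // 1 MiB blocks, 8 workers
            panic(err)
        }
        if _, err := io.Copy(w, os.Stdin); err != nil {
            panic(err)
        }
        if err := w.Close(); err != nil {
            panic(err)
        }
    }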
Package awk implements AWK-style processing of input streams. The awk package can be considered a shallow EDSL (embedded domain-specific language) for Go that facilitates text processing. It aims to implement the core semantics provided by AWK, a pattern scanning and processing language defined as part of the POSIX 1003.1 standard (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html) and therefore part of all standard Linux/Unix distributions. AWK's forte is simple transformations of tabular data. For example, the following is a complete AWK program that reads an entire file from the standard input device, splits each line into whitespace-separated columns, and outputs all lines in which the fifth column is an odd number: Here's a typical Go analogue of that one-line AWK program: The goal of the awk package is to emulate AWK's simplicity while simultaneously taking advantage of Go's speed, safety, and flexibility. With the awk package, the preceding code reduces to the following: While not a one-liner like the original AWK program, the above is conceptually close to it. The AppendStmt method defines a script in terms of patterns and actions exactly as in the AWK program. The Run method then runs the script on an input stream, which can be any io.Reader. For those programmers unfamiliar with AWK, an AWK program consists of a sequence of pattern/action pairs. Each pattern that matches a given line causes the corresponding action to be performed. AWK programs tend to be terse because AWK implicitly reads the input file, splits it into records (default: newline-terminated lines), and splits each record into fields (default: whitespace-separated columns), saving the programmer from having to express such operations explicitly. Furthermore, AWK provides a default pattern, which matches every record, and a default action, which outputs a record unmodified. The awk package attempts to mimic those semantics in Go. Basic usage consists of three steps:

1. Script allocation (awk.NewScript)
2. Script definition (Script.AppendStmt)
3. Script execution (Script.Run)

In Step 2, AppendStmt is called once for each pattern/action pair that is to be appended to the script. The same script can be applied to multiple input streams by re-executing Step 3. Actions to be executed on every run of Step 3 can be supplied by assigning the script's Begin and End fields. The Begin action is typically used to initialize script state by calling methods such as SetRS and SetFS and assigning user-defined data to the script's State field (what would be global variables in AWK). The End action is typically used to store or report final results. To mimic AWK's dynamic type system, the awk package provides the Value and ValueArray types. Value represents a scalar that can be coerced without error to a string, an int, or a float64. ValueArray represents a (possibly multidimensional) associative array of Values. Both patterns and actions can access the current record's fields via the script's F method, which takes a 1-based index and returns the corresponding field as a Value. An index of 0 returns the entire record as a Value.
The following AWK features and GNU AWK extensions are currently supported by the awk package:

• the basic pattern/action structure of an AWK script, including BEGIN and END rules and range patterns
• control over record separation (RS), including regular expressions and null strings (implying blank lines as separators)
• control over field separation (FS), including regular expressions and null strings (implying single-character fields)
• fixed-width fields (FIELDWIDTHS)
• fields defined by a regular expression (FPAT)
• control over case-sensitive vs. case-insensitive comparisons (IGNORECASE)
• control over the number conversion format (CONVFMT)
• automatic enumeration of records (NR) and fields (NF)
• "weak typing"
• multidimensional associative arrays
• premature termination of record processing (next) and script processing (exit)
• explicit record reading (getline) from either the current stream or a specified stream
• maintenance of regular-expression status variables (RT, RSTART, and RLENGTH)

For more information about AWK and its features, see the awk(1) manual page on any Linux/Unix system (available online from, e.g., http://linux.die.net/man/1/awk) or read the book, "The AWK Programming Language" by Aho, Kernighan, and Weinberger. A number of examples ported from the POSIX 1003.1 standard document (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html) are presented below.
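A minimal sketch of the odd-fifth-column program described above, assuming the github.com/spakin/awk import path:

    package main

    import (
        "os"

        "github.com/spakin/awk"
    )

    func main() {
        s := awk.NewScript()
        // Pattern: the fifth field is odd. A nil action means the default
        // action, which prints the record unmodified.
        s.AppendStmt(func(s *awk.Script) bool { return s.F(5).Int()%2 == 1 }, nil)
        if err := s.Run(os.Stdin); err != nil {
            panic(err)
        }
    }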
gvt, a simple go vendoring tool based on gb-vendor. Usage: The commands are: Use "gvt help [command]" for more information about a command. Usage: fetch vendors an upstream import path. Recursive dependencies are fetched (at their master/tip/HEAD revision), unless they or their parent package are already present. If a subpackage of a dependency being fetched is already present, it will be deleted. The import path may include a url scheme. This may be useful when fetching dependencies from private repositories that cannot be probed. Flags: Usage: restore fetches the dependencies listed in the manifest. It's meant for workflows that don't include checking the vendored source into VCS, for example if .gitignore includes lines like Note that such a setup requires "gvt restore" to build the source, relies on the availability of the dependencies' repositories and breaks "go get". Flags: Usage: update replaces the source with the latest available from the head of the fetched branch. Updating from one copy of a dependency to another is ONLY possible when the dependency was fetched by branch, without using -tag or -revision. It will be updated to the HEAD of that branch; switching branches is not supported. To update across branches, or from one tag/revision to another, you must first use delete to remove the dependency, then fetch [ -tag | -revision | -branch ] to replace it. Flags: Usage: list formats the contents of the manifest file. Flags: Usage: delete removes a dependency from the vendor directory and the manifest. Flags:
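An illustrative session with the subcommands described above; the import path is invented:

    gvt fetch github.com/fatih/color
    gvt list
    gvt update github.com/fatih/color
    gvt delete github.com/fatih/color
    gvt restore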
Package cleanenv gives you a single tool to read application configuration from several sources with ease.

- read from several file formats (YAML, JSON, TOML, ENV, EDN) and parse into the internal structure;
- read environment variables into the internal structure;
- output environment variable list with descriptions into help output;
- custom variable readers (e.g. if you want to read from a remote config server, etc).

You can just prepare the config structure and fill it from the config file and environment variables. You can list all of your environment variables by means of help output: Then run go run main.go -h and the output will include: For more detailed information check the examples and example tests. Example_setter uses a type with a custom setter to parse environment variable data. Example_updater uses a type with a custom updater.
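A minimal sketch of the pattern described above; the structure, file name and variable names are illustrative:

    package main

    import (
        "fmt"

        "github.com/ilyakaznacheev/cleanenv"
    )

    // ConfigDatabase is an illustrative configuration structure.
    type ConfigDatabase struct {
        Port     string `yaml:"port" env:"PORT" env-default:"5432"`
        Host     string `yaml:"host" env:"HOST" env-default:"localhost"`
        Name     string `yaml:"name" env:"NAME" env-description:"Database name"`
        User     string `yaml:"user" env:"USER"`
        Password string `yaml:"password" env:"PASSWORD"`
    }

    func main() {
        var cfg ConfigDatabase
        // Read config.yml, then override fields from the environment.
        if err := cleanenv.ReadConfig("config.yml", &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }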
video-transcoding-api HTTP API for transcoding media files into different formats using pluggable providers. ## Currently supported providers + [Amazon Elastic Transcoder](https://aws.amazon.com/elastictranscoder/) + [Elemental Conductor](https://www.elementaltechnologies.com/products/elemental-conductor) + [Encoding.com](http://api.encoding.com) + [Zencoder](http://zencoder.com) Schemes: http BasePath: / Version: 1.0.0 License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0.html Consumes: - application/json Produces: - application/json swagger:meta
Command pigeon generates parsers in Go from a PEG grammar. From Wikipedia [0]: Its features and syntax are inspired by the PEG.js project [1], while the implementation is loosely based on [2]. Formal presentation of the PEG theory by Bryan Ford is also an important reference [3]. An introductory blog post can be found at [4]. The pigeon tool must be called with PEG input as defined by the accepted PEG syntax below. The grammar may be provided by a file or read from stdin. The generated parser is written to stdout by default. The following options can be specified: The tool makes no attempt to format the code, nor to detect the required imports. It is recommended to use goimports to properly generate the output code: The goimports tool can be installed with: If the code blocks in the grammar (see below, section "Code block") are golint- and go vet-compliant, then the resulting generated code will also be golint- and go vet-compliant. The generated code doesn't use any third-party dependency unless code blocks in the grammar require such a dependency. The accepted syntax for the grammar is formally defined in the grammar/pigeon.peg file, using the PEG syntax. What follows is an informal description of this syntax. Identifiers, whitespace, comments and literals follow the same notation as the Go language, as defined in the language specification (http://golang.org/ref/spec#Source_code_representation): The grammar must be Unicode text encoded in UTF-8. New lines are identified by the \n character (U+000A). Space (U+0020), horizontal tabs (U+0009) and carriage returns (U+000D) are considered whitespace and are ignored except to separate tokens. A PEG grammar consists of a set of rules. A rule is an identifier followed by a rule definition operator and an expression. An optional display name - a string literal used in error messages instead of the rule identifier - can be specified after the rule identifier. E.g.: The rule definition operator can be any one of those: A rule is defined by an expression. The following sections describe the various expression types. Expressions can be grouped by using parentheses, and a rule can be referenced by its identifier in place of an expression. The choice expression is a list of expressions that will be tested in the order they are defined. The first one that matches will be used. Expressions are separated by the forward slash character "/". E.g.: Because the first match is used, it is important to think about the order of expressions. For example, in this rule, "<=" would never be used because the "<" expression comes first: The sequence expression is a list of expressions that must all match in that same order for the sequence expression to be considered a match. Expressions are separated by whitespace. E.g.: A labeled expression consists of an identifier followed by a colon ":" and an expression. A labeled expression introduces a variable named with the label that can be referenced in the code blocks in the same scope. The variable will have the value of the expression that follows the colon. E.g.: The variable is typed as an empty interface, and the underlying type depends on the following: For terminals (character and string literals, character classes and the any matcher), the value is []byte. E.g.: For predicates (& and !), the value is always nil. E.g.: For a sequence, the value is a slice of empty interfaces, one for each expression value in the sequence. 
The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a repetition (+ and *), the value is a slice of empty interfaces, one for each repetition. The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a choice expression, the value is that of the matching choice. E.g.: For the optional expression (?), the value is nil or the value of the expression. E.g.: Of course, the type of the value can be anything once an action code block is used. E.g.: An expression prefixed with the ampersand "&" is the "and" predicate expression: it is considered a match if the following expression is a match, but it does not consume any input. An expression prefixed with the exclamation point "!" is the "not" predicate expression: it is considered a match if the following expression is not a match, but it does not consume any input. E.g.: The expression following the & and ! operators can be a code block. In that case, the code block must return a bool and an error. The operator's semantics are the same: & is a match if the code block returns true, ! is a match if the code block returns false. The code block has access to any labeled value defined in its scope. E.g.: An expression followed by "*", "?" or "+" is a match if the expression occurs zero or more times ("*"), zero or one time ("?") or one or more times ("+") respectively. The match is greedy; it will match as many times as possible. E.g. A literal matcher tries to match the input against a single character or a string literal. The literal may be a single-quoted single character, a double-quoted string or a backtick-quoted raw string. The same rules as in Go apply regarding the allowed characters and escapes. The literal may be followed by a lowercase "i" (outside the ending quote) to indicate that the match is case-insensitive. E.g.: A character class matcher tries to match the input against a class of characters inside square brackets "[...]". Inside the brackets, characters represent themselves and the same escapes as in string literals are available, except that the single- and double-quote escape is not valid; instead the closing square bracket "]" must be escaped to be used. Character ranges can be specified using the "[a-z]" notation. Unicode classes can be specified using the "[\pL]" notation, where L is a single-letter Unicode class of characters, or using the "[\p{Class}]" notation where Class is a valid Unicode class (e.g. "Latin"). As for string literals, a lowercase "i" may follow the matcher (outside the ending square bracket) to indicate that the match is case-insensitive. A "^" as first character inside the square brackets indicates that the match is inverted (it is a match if the input does not match the character class matcher). E.g.: The any matcher is represented by the dot ".". It matches any character except the end of file, thus the "!." expression is used to indicate "match the end of file". E.g.: Code blocks can be added to generate custom Go code. There are three kinds of code blocks: the initializer, the action and the predicate. All code blocks appear inside curly braces "{...}". The initializer must appear first in the grammar, before any rule. It is copied as-is (minus the wrapping curly braces) at the top of the generated parser. It may contain function declarations, types, variables, etc. just like any Go file. Every symbol declared here will be available to all other code blocks. Although the initializer is optional in a valid grammar, it is usually required to generate a valid Go source code file (for the package clause). E.g.:
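A hedged sketch of a tiny grammar showing the initializer, a labeled expression, action code blocks and the "!." end-of-file idiom; the rule names are invented:

    {
    package main

    import "strconv"
    }

    Input <- n:Number !. {
        // The labeled value n arrives here as an interface{}.
        return n, nil
    }

    Number <- [0-9]+ {
        // c.text holds the bytes matched by this expression.
        return strconv.Atoi(string(c.text))
    }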
Action code blocks are code blocks declared after an expression in a rule. Those code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as interface{}) and must return two values, the first being the value of the expression (an interface{}), and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: Predicate code blocks are code blocks declared immediately after the and "&" or the not "!" operators. Like action code blocks, predicate code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as interface{}) and must return two values, the first being a bool and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: The current type is a struct that provides two useful fields that can be accessed in action and predicate code blocks: "pos" and "text". The "pos" field indicates the current position of the parser in the source input. It is itself a struct with three fields: "line", "col" and "offset". Line is a 1-based line number, col is a 1-based column number that counts runes from the start of the line, and offset is a 0-based byte offset. The "text" field is the slice of bytes of the current match. It is empty in a predicate code block. The parser generated by pigeon exports a few symbols so that it can be used as a package with public functions to parse input text. The exported API is: See the godoc page of the generated parser for the test/predicates grammar for an example documentation page of the exported API: http://godoc.org/github.com/mna/pigeon/test/predicates. Like the grammar used to generate the parser, the input text must be UTF-8-encoded Unicode. The start rule of the parser is the first rule in the PEG grammar used to generate the parser. A call to any of the Parse* functions returns the value generated by executing the grammar on the provided input text, and an optional error. Typically, the grammar should generate some kind of abstract syntax tree (AST), but for simple grammars it may evaluate the result immediately, such as in the examples/calculator example. There are no constraints imposed on the author of the grammar; it can return whatever is needed. When the parser returns a non-nil error, the error is always of type errList, which is defined as a slice of errors ([]error). Each error in the list is of type *parserError. This is a struct that has an "Inner" field that can be used to access the original error. So if a code block returns some well-known error like: The original error can be accessed this way: By default the parser will continue after an error is returned and will accumulate all errors found during parsing. If the grammar reaches a point where it shouldn't continue, a panic statement can be used to terminate parsing. The panic will be caught at the top-level of the Parse* call and will be converted into a *parserError like any error, and an errList will still be returned to the caller.
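Because the generated parser lives in the caller's package, those unexported error types can be inspected directly. A hedged fragment, assuming a generated Parse function and placed in the same package as the generated code:

    got, err := Parse("input.txt", []byte("1 / 0"))
    if err != nil {
        // The returned error is always an errList ([]error).
        for _, e := range err.(errList) {
            pe := e.(*parserError)
            fmt.Println(pe.Inner) // the original error value
        }
    }
    _ = got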
The divide by zero error in the examples/calculator grammar leverages this feature (no special code is needed to handle division by zero; if it happens, the runtime panics and the panic is recovered and returned as a parsing error). Providing good error reporting in a parser is not a trivial task. Part of it is provided by the pigeon tool, which offers features such as filename, position and rule name in the error message, but an important part of good error reporting needs to be done by the grammar author. For example, many programming languages use double-quotes for string literals. Usually, if the opening quote is found, the closing quote is expected, and if none is found, no other rule will match; there's no need to backtrack and try other choices, an error should be added to the list and the match should be consumed. In order to do this, the grammar can look something like this: This is just one example, but it illustrates the idea that error reporting needs to be thought out when designing the grammar. Generated parsers have user-provided code mixed with pigeon code in the same package, so there is no package boundary in the resulting code to prevent access to unexported symbols. What is meant to be implementation details in pigeon is also available to user code - which doesn't mean it should be used. For this reason, it is important to precisely define what is intended to be the supported API of pigeon, the parts that will be stable in future versions. The "stability" of the API attempts to make a similar guarantee as the Go 1 compatibility [5]. The following lists what parts of the current pigeon code fall under that guarantee (features may be added in the future): The pigeon command-line flags and arguments: those will not be removed and will maintain the same semantics. The explicitly exported API generated by pigeon. See [6] for the documentation of this API on a generated parser. The PEG syntax, as documented above. The code blocks (except the initializer) will always be generated as methods on the *current type, and this type is guaranteed to have the fields pos (type position) and text (type []byte). There are no guarantees on other fields and methods of this type. The position type will always have the fields line, col and offset, all defined as int. There are no guarantees on other fields and methods of this type. The type of the error value returned by the Parse* functions, when not nil, will always be errList defined as a []error. There are no guarantees on methods of this type, other than the fact it implements the error interface. Individual errors in the errList will always be of type *parserError, and this type is guaranteed to have an Inner field that contains the original error value. There are no guarantees on other fields and methods of this type. References:
Command mox is a modern, secure, full-featured, open source mail server for low-maintenance self-hosted email. Mox is started with the "serve" subcommand, but mox also has many other subcommands. Many of those commands talk to a running mox instance, through the ctl file in the data directory. Specify the configuration file (that holds the path to the data directory) through the -config flag or MOXCONF environment variable. Commands that don't talk to a running mox instance are often for testing/debugging email functionality. For example for parsing an email message, or looking up SPF/DKIM/DMARC records. Below is the usage information as printed by the command when started without any parameters, followed by the help and usage information for each command. Start mox, serving SMTP/IMAP/HTTPS. Incoming email is accepted over SMTP. Email can be retrieved by users using IMAP. HTTP listeners are started for the admin/account web interfaces, and for automated TLS configuration. Missing essential TLS certificates are immediately requested, other TLS certificates are requested on demand. Only implemented on unix systems, not Windows. Quickstart generates configuration files and prints instructions to quickly set up a mox instance. Quickstart writes configuration files, and prints initial admin and account passwords and DNS records you should create. If you run it on Linux it writes a systemd service file and prints commands to enable and start mox as a service. All output is written to quickstart.log for later reference. The user or uid is optional, defaults to "mox", and is the user or uid/gid mox will run as after initialization. Quickstart assumes mox will run on the machine you run quickstart on and uses its host name and public IPs. On many systems the hostname is not a fully qualified domain name, but only the first DNS "label", e.g. "mail" in case of "mail.example.org". If so, quickstart does a reverse DNS lookup to find the hostname, and as fallback uses the label plus the domain of the email address you specified. Use flag -hostname to explicitly specify the hostname mox will run on. Mox is by far easiest to operate if you let it listen on ports 443 (HTTPS) and 80 (HTTP). TLS will be fully automatic with ACME with Let's Encrypt. You can run mox along with an existing webserver, but because of MTA-STS and autoconfig, you'll need to forward HTTPS traffic for two domains to mox. Run "mox quickstart -existing-webserver ..." to generate configuration files and instructions for configuring mox along with an existing webserver. But please first consider configuring mox on port 443. It can itself serve domains with HTTP/HTTPS, including with automatic TLS with ACME, is easily configured through both configuration files and the admin web interface, and can act as a reverse proxy (and static file server for that matter), so you can forward traffic to your existing backend applications. Look for "WebHandlers:" in the output of "mox config describe-domains" and see the output of "mox config example webhandlers". Shut mox down, giving connections a maximum of 3 seconds to stop before closing them. While shutting down, new IMAP and SMTP connections will get a status response indicating temporary unavailability. Existing connections will get a 3 second period to finish their transaction and shut down. Under normal circumstances, only IMAP has long-living connections, with the IDLE command to get notified of new mail deliveries.
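An illustrative first session with the commands described above; the email address is invented and exact flags may differ between versions:

    mox quickstart you@example.org
    mox serve
    mox stop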
Secrets derived from the password, but not the password itself, are stored in the account database. The stored secrets are for authentication with: scram-sha-256, scram-sha-1, cram-md5 and plain text (bcrypt hash). The parameter is an account name, as configured under Accounts in domains.conf and as present in the data/accounts/ directory, not a configured email address for an account.

Set a new admin password, for the web interface. The password is read from stdin. Its bcrypt hash is stored in a file named "adminpasswd" in the configuration directory.

Print the log levels, or set a new default log level, or a level for the given package. By default, a single log level applies to all logging in mox. But for each "pkg", an overriding log level can be configured. Examples of packages: smtpserver, smtpclient, queue, imapserver, spf, dkim, dmarc, junk, message, etc. Specify a pkg and an empty level to clear the configured level for a package. Valid levels: error, info, debug, trace, traceauth, tracedata.

List hold rules for the delivery queue. Messages submitted to the queue that match a hold rule will be marked as on hold and not scheduled for delivery.

Add a hold rule for the delivery queue, to mark matching newly submitted messages as on hold. Set the matching rules with the flags. Don't specify any flags to match all submitted messages.

Remove a hold rule from the delivery queue, by its id.

List matching messages in the delivery queue. Prints each message with its ID, last and next delivery attempts, and last error.

Mark matching messages on hold. Messages that are on hold are not delivered until marked as off hold again, or otherwise handled by the admin.

Mark matching messages off hold. Once off hold, messages can be delivered according to their current next delivery attempt. See the "queue schedule" command.

Change the next delivery attempt for matching messages. The next delivery attempt is adjusted by the duration parameter. If the -now flag is set, the new delivery attempt is set to the duration added to the current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0".

Set the transport for matching messages. By default, the routing rules determine how a message is delivered. The default and common case is direct delivery with SMTP. Messages can get a previously configured transport assigned to use for delivery, e.g. using submission to another mail server or with connections over a SOCKS proxy.

Set TLS requirements for delivery of matching messages. Value "yes" is handled as if the RequireTLS extension was specified during submission. Value "no" is handled as if the message has a header "TLS-Required: No". This header is not added by the queue. If messages without this header are relayed through other mail servers, they will apply their own default TLS policy. Value "default" is the default behaviour, currently unverified opportunistic TLS.

Fail delivery of matching messages, delivering DSNs. Failing a message is handled similarly to how delivery is given up after all delivery attempts failed. The DSN (delivery status notification) message contains a line saying the message was canceled by the admin.

Remove matching messages from the queue. This is a dangerous operation: it completely removes the message. If you want to store the message, use "queue dump" before removing.

Dump a message from the queue. The message is printed to stdout in standard internet mail format.
List matching messages in the retired queue. Prints messages with their ID and results.

Print a message from the retired queue. Prints a JSON representation of the information from the retired queue.

Print addresses in the suppression list.

Add an address to the suppression list for an account.

Remove an address from the suppression list for an account.

Check if an address is present in the suppression list, for any or a specific account.

List matching webhooks in the queue. Prints a list of webhooks, their IDs and basic information.

Change the next delivery attempt for matching webhooks. The next delivery attempt is adjusted by the duration parameter. If the -now flag is set, the new delivery attempt is set to the duration added to the current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0".

Fail delivery of matching webhooks.

Print details of a webhook from the queue. The webhook is printed to stdout as JSON.

List matching webhooks in the retired queue. Prints a list of retired webhooks, their IDs and basic information.

Print details of a webhook from the retired queue. The retired webhook is printed to stdout as JSON.

Import a maildir into an account. The mbox/maildir archive is accessed and imported by the running mox process, so it must have access to the archive files. The default suggested systemd service file isolates mox from most of the file system, with only the "data/" directory accessible, so you may want to put the mbox/maildir archive files in a directory like "data/import/" to make them available to mox.

By default, messages will train the junk filter based on their flags and, if the "automatic junk flags" configuration is set, based on mailbox naming. If the destination mailbox is the Sent mailbox, the recipients of the messages are added to the message metadata, causing later incoming messages from these recipients to be accepted, unless other reputation signals prevent that.

Users can also import mailboxes/messages through the account web page by uploading a zip or tgz file with mbox and/or maildirs.

Messages are imported even if already present. Importing messages twice will result in duplicate messages.

Mailbox flags, like "seen" and "answered", will be imported. An optional dovecot-keywords file can specify additional flags, like Forwarded/Junk/NotJunk.

Import an mbox into an account. Using mbox is not recommended; maildir is a better-defined format. The mbox/maildir archive is accessed and imported by the running mox process, so it must have access to the archive files. The default suggested systemd service file isolates mox from most of the file system, with only the "data/" directory accessible, so you may want to put the mbox/maildir archive files in a directory like "data/import/" to make them available to mox.

By default, messages will train the junk filter based on their flags and, if the "automatic junk flags" configuration is set, based on mailbox naming. If the destination mailbox is the Sent mailbox, the recipients of the messages are added to the message metadata, causing later incoming messages from these recipients to be accepted, unless other reputation signals prevent that.

Users can also import mailboxes/messages through the account web page by uploading a zip or tgz file with mbox and/or maildirs.

Messages are imported even if already present. Importing messages twice will result in duplicate messages.

Export one or all mailboxes from an account in maildir format. Export bypasses a running mox instance.
It opens the account mailbox/message database file directly. This may block if a running mox instance also has the database open, e.g. for IMAP connections. To export from a running instance, use the accounts web page or webmail.

Export messages from one or all mailboxes in an account in mbox format. Using mbox is not recommended; maildir is a better format. Export bypasses a running mox instance. It opens the account mailbox/message database file directly. This may block if a running mox instance also has the database open, e.g. for IMAP connections. To export from a running instance, use the accounts web page or webmail.

For mbox export, "mboxrd" is used: message lines starting with the magic "From " string are escaped by prepending a >. All ">*From " lines are escaped, otherwise reconstructing the original could lose a ">".

Start a local SMTP/IMAP server that accepts all messages, useful when testing/developing software that sends email.

Localserve starts mox with a configuration suitable for local email-related software development/testing. It listens for SMTP/Submission(s), IMAP(s) and HTTP(s), on the regular port numbers + 1000.

Data is stored in the system user's configuration directory under "mox-localserve", e.g. $HOME/.config/mox-localserve/ on linux, but this can be overridden with the -dir flag. If the directory does not yet exist, it is automatically initialized with configuration files, an account with email address mox@localhost and password moxmoxmox, and a newly generated self-signed TLS certificate.

Incoming messages are delivered as normal, falling back to accepting and delivering to the mox account for unknown addresses. Submitted messages are added to the queue, which delivers by ignoring the destination servers, always connecting to itself instead.

Recipient addresses with the following localpart suffixes are handled specially:

- "temperror": fail with a temporary error code
- "permerror": fail with a permanent error code
- [45][0-9][0-9]: fail with the specific error code
- "timeout": no response (for an hour)

If the localpart begins with "mailfrom" or "rcptto", the error is returned during those commands instead of during "data".

Prints help about matching commands. If multiple commands match, they are listed along with the first line of their help text. If a single command matches, its usage and full help text is printed.

Creates a backup of the data directory. Backup creates consistent snapshots of the databases and message files and copies other files in the data directory. Empty directories are not copied. These files can then be stored elsewhere for long-term storage, or used to fall back to should an upgrade fail. Simply copying files in the data directory while mox is running can result in unusable database files.

Message files never change (they are read-only, though they can be removed) and are hard-linked so they don't consume additional space. If hardlinking fails, for example when the backup destination directory is on a different file system, a regular copy is made. Using a destination directory like "data/tmp/backup" increases the odds hardlinking succeeds: the default systemd service file specifically mounts the data directory, causing attempts to hardlink outside it to fail with an error about cross-device linking.

All files in the data directory that aren't recognized (i.e. other than known database files, message files, an acme directory, the "tmp" directory, etc.) are stored, but with a warning.
Remove files in the destination directory before doing another backup. The backup command will not overwrite files, but will print and return errors. Exit code 0 indicates the backup was successful. A clean successful backup does not print any output, but may print warnings. Use the -verbose flag for details, including timing.

To restore a backup, first shut down mox, move away the old data directory and move an earlier backed-up directory in its place, run "mox verifydata", possibly with the "-fix" option, and restart mox. After the restore, you may also want to run "mox bumpuidvalidity" for each account for which messages in a mailbox changed, to force IMAP clients to synchronize mailbox state.

Before upgrading, to check if the upgrade will likely succeed, first make a backup, then use the new mox binary to run "mox verifydata" on the backup. This can change the backup files (e.g. upgrade database files, move away unrecognized message files), so you should make a new backup before actually upgrading.

Verify the contents of a data directory, typically of a backup.

Verifydata checks all database files to see if they are valid BoltDB/bstore databases. It checks that all messages in the database have a corresponding on-disk message file and that there are no unrecognized files. If option -fix is specified, unrecognized message files are moved away. This may be needed after a restore, because messages enqueued or delivered in the future may get those message sequence numbers assigned and writing the message file would fail. Consistency of message/mailbox UID, UIDNEXT and UIDVALIDITY is verified as well.

Because verifydata opens the database files, schema upgrades may automatically be applied. This can happen if you use a new mox release. It is useful to run "mox verifydata" with a new binary before attempting an upgrade, but only on a copy of the database files, as made with "mox backup". Before upgrading, make a new backup again, since "mox verifydata" may have upgraded the database files, possibly making them no longer readable by the previous version.

Print licenses of mox source code and dependencies.

Parses and validates the configuration files. If valid, the command exits with status 0. If not valid, all errors encountered are printed.

Check the DNS records against the configuration for the domain, and print any errors/warnings.

Prints annotated DNS records as a zone file that should be created for the domain. The zone file can be imported into existing DNS software. You should review the DNS records, especially if your domain previously/currently has email configured.

Prints an annotated empty configuration for use as domains.conf. The domains configuration file contains the domains and their configuration, and accounts and their configuration. This includes the configured email addresses. The mox admin web interface, and the mox command line interface, can make changes to this file. Mox automatically reloads this file when it changes. Like the static configuration, the example domains.conf printed by this command needs modifications to make it valid.

Prints an annotated empty configuration for use as mox.conf. The static configuration file cannot be reloaded while mox is running. Mox has to be restarted for changes to the static configuration file to take effect. This configuration file needs modifications to make it valid. For example, it may contain unfinished list items.

Add an account with an email address and reload the configuration. Email can be delivered to this address/account.
A password has to be configured explicitly, see the setaccountpassword command.

Remove an account and reload the configuration. Email addresses for this account will also be removed, and incoming email for these addresses will be rejected. All data for the account will be removed.

Adds an address to an account and reloads the configuration. If the address starts with a @ (i.e. the localpart is missing), this is a catchall address for the domain.

Remove an address and reload the configuration. Incoming email for this address will be rejected after the address is removed.

Adds a new domain to the configuration and reloads the configuration. The account is used for the postmaster mailboxes of the domain, and also for DMARC and TLS reporting. Localpart is the "username" at the domain for this account. It must be set if and only if the account does not yet exist.

Remove a domain from the configuration and reload the configuration. This is a dangerous operation. Incoming email delivery for this domain will be rejected.

List TLS public keys for TLS client certificate authentication. If the account is absent, the TLS public keys for all accounts are listed.

Get a TLS public key for a fingerprint. Prints the type, name, account and address for the key, and the certificate in PEM format.

Add a TLS public key to the account of the given address. The public key is read from the certificate. The optional name is a human-readable descriptive name of the key. If absent, the CommonName from the certificate is used.

Remove the TLS public key for a fingerprint.

Generate an ed25519 private key and a minimal certificate for use as a TLS public key, and write them to files starting with stem. The private key is written to $stem.$timestamp.ed25519privatekey.pkcs8.pem. The certificate is written to $stem.$timestamp.certificate.pem. The private key and certificate are also written together to $stem.$timestamp.ed25519privatekey-certificate.pem.

The certificate can be added to an account with "mox config account tlspubkey add". The combined file can be used with "mox sendmail". The private key is also written to standard error in raw-url-base64-encoded form, also for use with "mox sendmail". The fingerprint is written to standard error too, for reference.

Show aliases (lists) for a domain.

Print settings and members of an alias (list).

Add a new alias (list) with one or more addresses and public posting enabled. An alias is used for delivering incoming email to multiple recipients. If you want to add an address to an account, don't use an alias, just add the address to the account.

Update alias (list) configuration.

Remove an alias (list).

Add addresses to an alias (list).

Remove addresses from an alias (list).

Describe the configuration for mox when invoked as sendmail.

Prints a systemd unit service file for mox. This is the same file as generated using quickstart. If the systemd service file has changed with a newer version of mox, use this command to generate an up-to-date version.

Ensure host private keys exist for TLS listeners with ACME. In mox.conf, each listener can have TLS configured. Long-lived private key files can be specified, which will be used when requesting ACME certificates. Configuring these private keys makes it feasible to publish DANE TLSA records for the corresponding public keys in DNS, protected with DNSSEC, allowing TLS certificate verification without depending on a list of Certificate Authorities (CAs). Previous versions of mox did not pre-generate private keys for use with ACME certificates, but would generate private keys on demand.
By explicitly configuring private keys, they will not change automatically with new certificates, and the DNS TLSA records stay valid.

This command looks for listeners in mox.conf with TLS with ACME configured. For each missing host private key (of type rsa-2048 and ecdsa-p256) a key is written to config/hostkeys/. If a certificate exists in the ACME "cache", its private key is copied. Otherwise a new private key is generated. Snippets for manually updating/editing mox.conf are printed.

After running this command and updating mox.conf, run "mox config dnsrecords" for a domain and create the TLSA DNS records it suggests to enable DANE.

List available config examples, or print a specific example.

Check if a newer version of mox is available. A single DNS TXT lookup to _updates.xmox.nl tells if a new version is available. If so, a changelog is fetched from https://updates.xmox.nl, and the individual entries are verified with a builtin public key. The changelog is printed.

Turn an ID from a Received header into a cid, for looking up in logs. A cid is essentially a connection counter initialized when mox starts. Each log line contains a cid. Received headers added by mox contain a unique ID that can be decrypted to a cid only by the admin of a mox instance.

Print the configuration for email clients for a domain. Sending email is typically not done on the SMTP port 25, but on submission ports 465 (with TLS) and 587 (without initial TLS, but usually added to the connection with STARTTLS). For IMAP, the port with TLS is 993 and without is 143. Without TLS/STARTTLS, passwords are sent in clear text, which should only be configured over otherwise secured connections, like a VPN.

Dial the address using TLS with certificate verification using DANE. Data is copied between the connection and stdin/stdout until either side closes the connection.

Connect to the MX server for a domain using STARTTLS, verified with DANE. If no destination host is specified, regular delivery logic is used to find the hosts to attempt delivery to. This involves following CNAMEs for the domain, looking up MX records, and possibly falling back to the domain name itself as host. If a destination host is specified, that is the only candidate host considered for dialing.

With a list of destinations gathered, each is dialed until a successful SMTP session verified with DANE has been initialized, including EHLO and STARTTLS commands. Once connected, data is copied between the connection and stdin/stdout, until either side closes the connection. This command follows the same logic as delivery attempts made from the queue, sharing most of its code.

Print a TLSA record for the given certificate/key and parameters. Valid values:

- usage: pkix-ta (0), pkix-ee (1), dane-ta (2), dane-ee (3)
- selector: cert (0), spki (1)
- matchtype: full (0), sha2-256 (1), sha2-512 (2)

Common DANE TLSA record parameters are: dane-ee spki sha2-256, or 3 1 1, followed by a sha2-256 hash of the DER-encoded "SPKI" (subject public key info) from the certificate. The printed record can be used directly as a DNS zone file entry. The first usable information from the pem file is used to compose the TLSA record. In case of selector "cert", a certificate is required. Otherwise the "subject public key info" (spki) of the first certificate or public or private key (pkcs#8, pkcs#1 or ec private key) is used.

Lookup a DNS name of the given type. Lookup always prints whether the response was DNSSEC-protected. Examples:

    mox dns lookup ptr 1.1.1.1
    mox dns lookup mx xmox.nl
    mox dns lookup txt _dmarc.xmox.nl.
    mox dns lookup tlsa _25._tcp.xmox.nl

Generate a new ed25519 key for use with DKIM. Ed25519 keys are much smaller than RSA keys of comparable cryptographic strength. This is convenient because of maximum DNS message sizes. At the time of writing, not many mail servers appear to support ed25519 DKIM keys though, so it is recommended to sign messages with both RSA and ed25519 keys.

Generate a new 2048-bit RSA private key for use with DKIM. The generated file is in PEM format, and has a comment noting it was generated for use with DKIM, by mox.

Lookup and print the DKIM record for the selector at the domain.

Print a DKIM DNS TXT record with the public key derived from the private key read from stdin. The DNS record should be configured as a TXT record at $selector._domainkey.$domain.

Verify the DKIM signatures in a message and print the results. The message is parsed, and the DKIM-Signature headers are validated. Validation of older messages may fail because the DNS records have been removed or changed by now, or because the signature header may have specified an expiration time that has passed.

Sign a message, adding DKIM-Signature headers based on the domain in the From header. The message is parsed, the domain is looked up in the configuration files, and DKIM-Signature headers are generated. The message is printed with the DKIM-Signature headers prepended.

Lookup the DMARC policy for a domain, a DNS TXT record at _dmarc.<domain>, validate it and print it.

Parse a DMARC report from an email message, and print its extracted details. DMARC reports are periodically mailed, if requested in the DMARC DNS record of a domain. Reports are sent by mail servers that received messages with our domain in a From header. This may or may not be legitimate email. DMARC reports contain summaries of evaluations of DMARC and DKIM/SPF, which can help understand email deliverability problems.

Parse an email message and evaluate it against the DMARC policy of the domain in the From header. mailfromaddress and helodomain are used for SPF validation. If both are empty, SPF validation is skipped. mailfromaddress should be the address used as MAIL FROM in the SMTP session. For DSN messages, that address may be empty. The helo domain was specified at the beginning of the SMTP transaction that delivered the message. These values can be found in message headers.

For each reporting address in the domain's DMARC record, check if it has opted into receiving reports (if needed). A DMARC record can request reports about DMARC evaluations to be sent to an email/http address. If the organizational domain of the DMARC record and that of the report destination address do not match, the destination address must opt in to receiving DMARC reports by creating a DMARC record at <dmarcdomain>._report._dmarc.<reportdestdomain>.

Test if an IP is in the DNS blocklist of the zone, e.g. bl.spamcop.net. If the IP is in the blocklist, an explanation is printed. This is typically a URL with more information.

Check the health of the DNS blocklist represented by the zone, e.g. bl.spamcop.net. The health of a DNS blocklist can be checked by querying for 127.0.0.1 and 127.0.0.2. The second must be present and the first must not.

Lookup the MTA-STS record and policy for the domain. MTA-STS is a mechanism for a domain to specify if it requires TLS connections for delivering email. If a domain has a valid MTA-STS DNS TXT record at _mta-sts.<domain>, it signals that it implements MTA-STS. A policy can then be fetched at https://mta-sts.<domain>/.well-known/mta-sts.txt.
The policy specifies the mode (enforce, testing, none), which MX servers support TLS and should be used, and how long the policy can be cached.

Recreate and retrain the junk filter for the account or all accounts. Useful after having made changes to the junk filter configuration, or if the implementation has changed.

Sendmail is a drop-in replacement for /usr/sbin/sendmail to deliver emails sent by unix processes like cron. If invoked as "sendmail", it will act as sendmail for sending messages. Its intention is to let processes like cron send emails. Messages are submitted to an actual mail server over SMTP. The destination mail server and credentials are configured in /etc/moxsubmit.conf, see "mox config describe-sendmail". The From message header is rewritten to the configured address. When the addressee appears to be a local user (because the address has no @), the message is sent to the configured default address.

If submitting an email fails, it is added to a directory moxsubmit.failures in the user's home directory.

Most flags are ignored to fake compatibility with other sendmail implementations. A single recipient or the -t flag with a To-header is required. With the -t flag, Cc and Bcc headers are not handled specially, so Bcc is not removed and the addresses do not receive the email.

/etc/moxsubmit.conf should be group-readable and not readable by others, and this binary should be setgid that group.

Check the status of an IP for the policy published in DNS for the domain. IPs may be allowed to send for a domain, or disallowed, and several shades in between. If not allowed, an explanation may be provided by the policy. If so, the explanation is printed. The SPF mechanism that matched (if any) is also printed.

Lookup the SPF record for the domain and print it.

Parse the record as an SPF record. If valid, nothing is printed.

Lookup the TLSRPT record for the domain. A TLSRPT record typically contains an email address where reports about TLS connectivity should be sent. Mail servers attempting delivery to our domain should attempt to use TLS. TLSRPT lets them report how many connections successfully used TLS, and what kind of errors occurred otherwise.

Parse and print the TLSRPT in the message. The report is printed as formatted JSON.

Prints this mox version.

Lists available methods, prints request/response parameters for a method, or calls a method with a request read from standard input.

List available examples, or print a specific example.

Change the IMAP UID validity of the mailbox, causing IMAP clients to refetch messages. This can be useful after manually repairing metadata about the account/mailbox. Opens the account database file directly. Ensure mox does not have the account open, or is not running.

Reassign UIDs in one mailbox or all mailboxes in an account and bump the UID validity, causing IMAP clients to refetch messages. Opens the account database file directly. Ensure mox does not have the account open, or is not running.

Fix inconsistent UIDVALIDITY and UIDNEXT in messages/mailboxes/account. The next UID to use for a message in a mailbox should always be higher than any existing message UID in the mailbox. If it is not, the mailbox UIDNEXT is updated. Each mailbox has a UIDVALIDITY sequence number, which should always be lower than the per-account next UIDVALIDITY to use. If it is not, the account's next UIDVALIDITY is updated. Opens the account database file directly. Ensure mox does not have the account open, or is not running.
Ensure message sizes in the database match the sum of the message prefix length and on-disk file size. Messages with an inconsistent size are also parsed again. If an inconsistency is found, you should probably also run "mox bumpuidvalidity" on the mailboxes or entire account to force IMAP clients to refetch messages.

Parse all messages in the account or all accounts again. Can be useful after upgrading mox with improved message parsing. Messages are parsed in batches, so other access to the mailboxes/messages is not blocked while reparsing all messages.

Ensure messages in the database have a pre-parsed MIME form in the database.

Recalculate message counts for all mailboxes in the account, and the total message size for quota. When a message is added to or removed from a mailbox, or when message flags change, the total, unread, unseen and deleted message counts are updated, along with the total size of the mailbox and the total message size for the account. In case of a bug in this accounting, the numbers could become incorrect. This command will find, fix and print them.

Parse a message and print a JSON representation.

Reassign message threads, for all accounts, or optionally only the specified account. Threading for all messages in an account is first reset, and the new base subject and normalized message-id are saved with the message. Then all messages are evaluated and matched against their parents/ancestors.

Messages are matched based on the References header, with a fall-back to the In-Reply-To header, and if neither is present/valid, based only on the base subject. A References header typically points to multiple previous messages in a hierarchy, from oldest ancestor to most recent parent. An In-Reply-To header would have only the message-id of the parent message.

A message is only linked to a parent/ancestor if their base subject is the same. This ensures unrelated replies, with a new subject, are placed in their own thread. The base subject is lower-cased, has whitespace collapsed to a single space, and some components removed: a leading "Re:", "Fwd:", "Fw:", or bracketed tag (that mailing lists often add, e.g. "[listname]"), a trailing "(fwd)", or an enclosing "[fwd: ...]" (a simplified sketch of this normalization follows below).

Messages are linked to all their ancestors. If an intermediate parent/ancestor message is deleted in the future, the message can still be linked to the earlier ancestors. If the direct parent wasn't available while matching, this is stored as the message having a "missing link" to its stored ancestors.
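As a rough illustration of the base-subject normalization described above, a simplified version could look like the following Go sketch. This is not mox's actual implementation; the function name and the exact set of stripped components are assumptions for illustration (it assumes the strings package is imported):

    // baseSubject is a simplified sketch of the normalization described
    // above: lower-case, collapse whitespace, strip leading "re:"/"fwd:"/
    // "fw:" prefixes and bracketed list tags, and a trailing "(fwd)".
    func baseSubject(subject string) string {
        s := strings.ToLower(subject)
        s = strings.Join(strings.Fields(s), " ") // collapse whitespace
        for {
            orig := s
            for _, p := range []string{"re:", "fwd:", "fw:"} {
                s = strings.TrimSpace(strings.TrimPrefix(s, p))
            }
            if strings.HasPrefix(s, "[") {
                if i := strings.Index(s, "]"); i >= 0 {
                    s = strings.TrimSpace(s[i+1:]) // drop "[listname]" tags
                }
            }
            if s == orig {
                break // nothing left to strip
            }
        }
        return strings.TrimSpace(strings.TrimSuffix(s, "(fwd)"))
    }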
Package users implements common web user workflows. Most of the provided functions are regular net/http handler functions. The following functionality is provided:

Special emphasis is placed on reducing the risk of someone hijacking user accounts. This is achieved by enforcing a certain user structure and following certain procedures: If your application does not follow these principles, you may not be able to use this package as is. However, the code may serve as a starting point if you apply its principles to your own use case. Note also the following:

The users.Main() function registers all handlers and starts an HTTP server (see the sketch at the end of this package's description). Any other handlers can be added to the http.DefaultServeMux before calling users.Main(). Alternatively, you can start your own HTTP server. See the implementation of users.Main() for how to add the package's handlers. See the package example for the most basic way to use the package.

In addition, the global Config struct contains all the variables that need to be adjusted for your specific application. It provides sensible defaults out of the box, which you can see in its documentation. The fields are as follows: The following fields control how templates are handled:

If your application supports internationalization, you can set the Internationalization field to true. If set to true, this package's code checks for the "lang" cookie and appends its value to the HTMLTemplateDir and MailTemplateDir directories to search for template files. Cookie values must be of the format "xx" or "xx-XX" (e.g. "en-US"). If they don't have this format, or if the corresponding subdirectory does not exist, the search falls back to the HTMLTemplateDir and MailTemplateDir directories. It is up to the application to set the "lang" cookie.

Emails are sent if the SendEmails field is set to true. You can provide your own email function by implementing the SendEmail field. Otherwise, the net/smtp package is used to send emails. The following fields need to be specified (fields starting with "SMTP" are only needed when you don't provide your own SendEmail implementation):

A number of functions serve as the interface to your database: Anyone using this package must define a type which implements this package's User interface. A user is in one of three possible states: Users have an ID which must be unique (e.g. generated by CUID() in the package github.com/rivo/sessions). But this package may also access users based on their unique email address, their verification ID, or their password reset token. You must implement the Config.NewUser function.

There are basic HTML templates (in the "html" subdirectory) and email templates (in the "mail" subdirectory). All HTML templates starting with "error_" are templates that will generate error messages which are then embedded in another HTML template. When starting to work with this package, you will want to make a copy of these two subdirectories and modify the templates to your needs.

This package implements some functions to render templates which are also public, so you may use them in other places, too. The function RenderPage() takes a template filename and a data object (to which the template will be bound), renders the template, and sends it to the browser. Instead of calling this function, however, RenderPageBasic() is used more often. It calls RenderPage() but populates the data object with the Config object and the User object (if one was provided). If an error message needs to be shown to the user, RenderPageError() can be used.
This actually involves two templates: one to generate only the error message (these template files start with "error_"), and another to generate the HTML page which shows the error message. Config and User will also be bound to the latter, as well as any data sent to the error message template.

There is another function for errors, RenderProgramError(), which is used to show program errors. These are unexpected errors, for example database connection issues, and should always be followed up on. While the user usually only sees a basic error message, more detailed information about the error is sent to the logger for further inspection.

The SendMail() function renders mail templates (based on text/template) and sends them to the user's email address. When writing your own templates, it is helpful to make a copy of the existing example templates and modify them to your needs. All templates include a header and a footer file. If you include more files, you will need to set the Config.HTMLTemplateIncludes and Config.MailTemplateIncludes fields accordingly.
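A minimal sketch of the users.Main() setup described above. The import path and the extra handler are assumptions for illustration; users.Main() itself is the entry point named in this package's documentation:

    package main

    import (
        "net/http"

        "github.com/rivo/users" // import path assumed
    )

    func main() {
        // Register your own handlers on the default mux first.
        http.HandleFunc("/dashboard", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello"))
        })

        // Then let the package register its handlers and start the server.
        users.Main()
    }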
ClickHouse-Bulk

Simple Yandex ClickHouse (https://clickhouse.yandex/) insert collector. It collects insert requests and sends them to ClickHouse servers.

Features:

- Groups n requests and sends them to any of the ClickHouse servers
- Sends collected data by interval
- Tested with VALUES, TabSeparated formats
- Supports many servers to send to
- Supports the query in query parameters and in the body
- Supports other query parameters like username, password, database
- Supports basic authentication

For example:

    INSERT INTO table3 (c1, c2, c3) VALUES ('v1', 'v2', 'v3')
    INSERT INTO table3 (c1, c2, c3) VALUES ('v4', 'v5', 'v6')

is sent as:

    INSERT INTO table3 (c1, c2, c3) VALUES ('v1', 'v2', 'v3')('v4', 'v5', 'v6')

Options:

- -config - config file (json); default _config.json_

Run `./clickhouse-bulk` and send queries to :8124. For better performance, the words FORMAT and VALUES must be uppercase.
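As a rough illustration of sending a query through the collector, here is a Go sketch. It assumes the collector is running locally on the default port :8124 mentioned above; endpoint details may differ per configuration:

    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func main() {
        // Post an insert to the collector, which buffers it and forwards
        // grouped inserts to the configured ClickHouse servers.
        query := "INSERT INTO table3 (c1, c2, c3) VALUES ('v1', 'v2', 'v3')"
        resp, err := http.Post("http://localhost:8124/", "text/plain", strings.NewReader(query))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body) // print the collector's response
    }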
Package ovirtclient provides a human-friendly Go client for the oVirt Engine. It provides an abstraction layer for the oVirt API, as well as a mocking facility for testing purposes.

This documentation contains two parts. This introduction explains setting up the client with the credentials. The API doc contains the individual API calls. When reading the API doc, start with the Client interface: it contains all components of the API. The individual APIs, their documentation and examples are located in subinterfaces, such as DiskClient.

There are several ways to create a client instance. The most basic way is to use the New() function as follows:

The mock client simulates the oVirt engine behavior in-memory without needing an actual running engine. This is a good way to provide a testing facility. It can be created using the NewMock method:

That's it! However, to make it really useful, you will need the test helper, which can set up test fixtures. The test helper can work in two ways: either it sets up test fixtures in the mock client, or it sets up a live connection and identifies a usable storage domain, cluster, etc. for testing purposes.

The ovirtclient.NewMockTestHelper() function can be used to create a test helper with a mock client in the backend:

The easiest way to set up the test helper for a live connection is by using environment variables. To do that, you can use the ovirtclient.NewLiveTestHelperFromEnv() function:

This function will inspect environment variables to determine if a connection to a live oVirt engine can be established. The following environment variables are supported:

- URL of the oVirt engine API. Mandatory.
- The username for the oVirt engine. Mandatory.
- The password for the oVirt engine. Mandatory.
- A file containing the CA certificate in PEM format.
- Provide the CA certificate in PEM format directly.
- Disable certificate verification if set. Not recommended.
- The cluster to use for testing. Will be automatically chosen if not provided.
- ID of the blank template. Will be automatically chosen if not provided.
- Storage domain to use for testing. Will be automatically chosen if not provided.
- VNIC profile to use for testing. Will be automatically chosen if not provided.

You can also create the test helper manually:

This library provides extensive logging. Each API interaction is logged on the debug level, and other messages are added on other levels. In order to provide logging this library uses the go-ovirt-client-log (https://github.com/oVirt/go-ovirt-client-log) interface definition. As long as your logger implements this interface, you will be able to receive log messages. The logging library also provides a few built-in loggers. For example, you can log via the default Go log interface, log in tests, or disable logging entirely. Finally, we also provide an adapter library for klog here: https://github.com/oVirt/go-ovirt-client-log-klog

Modern-day oVirt engines run secured with TLS. This means that the client needs a way to verify the certificate the server is presenting. This is controlled by the tls parameter of the New() function. You can implement your own source by implementing the TLSProvider interface, but the package also includes a ready-to-use provider. Create the provider using the TLS() function:

This provider has several functions. The easiest to set up is using the system trust root for certificates. However, this won't work on Windows. You then need to add your oVirt engine certificate to your system trust root.
If you don't want to, or can't, add the certificate to the system trust root, you can also provide it directly to the client. Finally, you can also disable certificate verification. Do we need to say that this is a very, very bad idea?

The configured tls variable can then be passed to the New() function to create an oVirt client (see the sketch at the end of this package's description).

This library attempts to retry API calls that can be retried if possible. Each function has a sensible retry policy. However, you may want to customize the retries by passing one or more retry flags. The following retry strategies are supported:

- Stop retries when the context parameter is canceled.
- Add a wait time after each attempt, increased by the given factor on each try. The default is a backoff with a factor of 2.
- Cancel retries if the error in question is a permanent error. This is enabled by default.
- Abort retries when a maximum number of tries is reached. On complex calls, the retries are counted per underlying API call.
- Abort retries when a certain time has elapsed for the higher-level call.
- Abort retries when a single underlying API call takes longer than the specified duration.
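Pulling the pieces above together, creating a client could look roughly like the following sketch. The engine URL and credentials are placeholders, the import path is assumed, and the exact signatures (TLS().CACertsFromSystem(), the logger argument, New()'s parameter list) are recalled from the package's public API and should be checked against the API documentation:

    package main

    import (
        ovirtclient "github.com/ovirt/go-ovirt-client" // import path assumed
    )

    func main() {
        // Create a TLS provider from the system trust root (see caveats above).
        tls := ovirtclient.TLS().CACertsFromSystem()

        client, err := ovirtclient.New(
            "https://engine.example.com/ovirt-engine/api", // placeholder URL
            "admin@internal",                              // placeholder username
            "super-secret",                                // placeholder password
            tls,
            nil, // logger; see go-ovirt-client-log for implementations
            nil, // extra settings
        )
        if err != nil {
            panic(err)
        }
        _ = client

        // For tests, the in-memory mock client needs no engine at all:
        mock := ovirtclient.NewMock()
        _ = mock
    }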
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples

Doing things this way is actually the same approach the standard library takes; see the os.Open function. The authors return type "error" to avoid the issue where returning a concrete error type makes err always != nil.

Validator returns only InvalidValidationError for bad validation input, and nil or ValidationErrors as type error; so, in your code, all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary; most of the time it isn't), and type-cast it to type ValidationErrors like so: err.(validator.ValidationErrors).

Custom Validation functions can be added (see the registration sketch near the end of this package's description). Example:

Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation.

Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example:

Multiple validators on a field will process in the order defined. Example:

Bad Validator definitions are not handled by the library. Example:

Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed, you should implement your own custom validator.

Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C.

Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C.

Here is a list of the current built-in validators:

Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -)

This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba).

When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself.

Same as structonly tag except that any struct level validations will not run.

Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set, validation will run.

This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported; each level you wish to dive into will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2

Keys & EndKeys: these are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2

This validates that the value is not the data type's default zero value. For numbers, ensures the value is not zero. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil.

The field under validation must be present and not empty only if any of the other specified fields are present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Examples:

The field under validation must be present and not empty only if all of the other specified fields are present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Example:

The field under validation must be present and not empty only when any of the other specified fields are not present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Examples:

The field under validation must be present and not empty only when all of the other specified fields are not present. For strings, ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Example:

This validates that the value is the default value and is almost the opposite of required.

For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, it validates the number of items.

For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, it validates the number of items.

For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, it validates the number of items.

For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, it validates the number of items.

For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, it validates the number of items.

For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers.

For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is greater than time.Now.UTC().
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is greater than or equal to time.Now.UTC().

For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is less than time.Now.UTC().

Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is less than or equal to time.Now.UTC().

This will validate the field value against another field's value, either within a struct or a passed-in field. Example #1: Example #2:

Field Equals Another Field (relative): this does the same as eqfield except that it validates the field provided relative to the top level struct.

This will validate that the field value is not equal to another field's value, either within a struct or a passed-in field. Examples:

Field Does Not Equal Another Field (relative): this does the same as nefield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date (see the sketch after this list of cross-field validators): Example #1: Example #2:

This does the same as gtfield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2:

This does the same as gtefield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2:

This does the same as ltfield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2:

This does the same as ltefield except that it validates the field provided relative to the top level struct.

This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types.

This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types.

For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
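To make the Start/End date pattern above concrete, here is a small sketch (struct and field names are illustrative):

    func ExampleGtField() {
        type Booking struct {
            Start time.Time `validate:"required"`
            End   time.Time `validate:"required,gtfield=Start"`
        }

        validate := validator.New()
        err := validate.Struct(Booking{
            Start: time.Now(),
            End:   time.Now().Add(time.Hour), // End must be after Start
        })
        fmt.Println(err) // <nil>; swapping Start and End fails with a gtfield error
    }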
- This validates that a string value contains ASCII alpha characters only.
- This validates that a string value contains ASCII alphanumeric characters only.
- This validates that a string value contains unicode alpha characters only.
- This validates that a string value contains unicode alphanumeric characters only.
- This validates that a string value contains a basic numeric value. Basic excludes exponents etc...; for integers or floats it returns true.
- This validates that a string value contains a valid hexadecimal.
- This validates that a string value contains a valid hex color, including the hashtag (#).
- This validates that a string value contains a valid rgb color.
- This validates that a string value contains a valid rgba color.
- This validates that a string value contains a valid hsl color.
- This validates that a string value contains a valid hsla color.
- This validates that a string value contains a valid email. This may not conform to all the possibilities of any rfc standard, but neither does any email provider accept all possibilities.
- This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform-independent function.
- This validates that a string value contains a valid url. This will accept any url the golang request uri accepts, but it must contain a schema, for example http:// or rtmp://.
- This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts.
- This validates that a string value contains a valid URN according to the RFC 2141 spec.
- This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag.
- This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag.
- This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the recognized formats (P2PKH, P2SH), and checksum validation is performed.
- Bitcoin Bech32 Address (segwit): this validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki). Special thanks to Pieter Wuille for providing reference implementations.
- This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28
- This validates that a string value contains the substring value.
- This validates that a string value contains any Unicode code points in the substring value.
- This validates that a string value contains the supplied rune value.
- This validates that a string value does not contain the substring value.
- This validates that a string value does not contain any Unicode code points in the substring value.
- This validates that a string value does not contain the supplied rune value.
- This validates that a string value starts with the supplied string value.
- This validates that a string value ends with the supplied string value.
- This validates that a string value contains a valid isbn10 or isbn13 value.
- This validates that a string value contains a valid isbn10 value.
- This validates that a string value contains a valid isbn13 value.
- This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead.
- This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead.
- This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead.
- This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead.
- This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true.
- This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true.
- This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true.
- This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64.
- This validates that a string value contains a valid latitude.
- This validates that a string value contains a valid longitude.
- This validates that a string value contains a valid U.S. Social Security Number.
- This validates that a string value contains a valid IP Address.
- This validates that a string value contains a valid v4 IP Address.
- This validates that a string value contains a valid v6 IP Address.
- This validates that a string value contains a valid CIDR Address.
- This validates that a string value contains a valid v4 CIDR Address.
- This validates that a string value contains a valid v6 CIDR Address.
- This validates that a string value contains a valid resolvable TCP Address.
- This validates that a string value contains a valid resolvable v4 TCP Address.
- This validates that a string value contains a valid resolvable v6 TCP Address.
- This validates that a string value contains a valid resolvable UDP Address.
- This validates that a string value contains a valid resolvable v4 UDP Address.
- This validates that a string value contains a valid resolvable v6 UDP Address.
- This validates that a string value contains a valid resolvable IP Address.
- This validates that a string value contains a valid resolvable v4 IP Address.
- This validates that a string value contains a valid resolvable v6 IP Address.
- This validates that a string value contains a valid Unix Address.
- This validates that a string value contains a valid MAC Address. Note: see Go's ParseMAC for accepted formats and types.
- This validates that a string value is a valid Hostname according to RFC 952 (https://tools.ietf.org/html/rfc952).
- This validates that a string value is a valid Hostname according to RFC 1123 (https://tools.ietf.org/html/rfc1123).
- Fully Qualified Domain Name (FQDN): this validates that a string value contains a valid FQDN.
- This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element
- This validates that a string value is a proper character reference in decimal or hexadecimal format.
- This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1
- This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform-independent function.
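As an illustration of several of the tags above in combination, including the dive and keys/endkeys tags from earlier (the struct and field names are illustrative):

    type Profile struct {
        Email  string `validate:"required,email"`
        Avatar string `validate:"omitempty,url"`
        IP     string `validate:"omitempty,ip4_addr"`
        // dive validates each element of the slice with the tags that follow.
        Tags []string `validate:"max=10,dive,min=2"`
        // keys/endkeys validate the map keys; the trailing tags apply to values.
        Labels map[string]string `validate:"dive,keys,alphanum,endkeys,required"`
    }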
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias; everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the former case will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags: Validator notes: Non-standard validators are a collection of validation rules that are frequently needed but are more complex than the ones found in the baked-in validators. A non-standard validator must be registered manually, just like your own custom validation functions. Example of registration and use (see the sketch below): Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
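A sketch of registering a rule by hand, the same way a non-standard or custom validator would be wired up. The "notblank" tag name and the rule body here are illustrative, and the import path is an assumption:

```go
package main

import (
	"fmt"
	"strings"

	validator "gopkg.in/go-playground/validator.v9" // import path is an assumption
)

func main() {
	validate := validator.New()

	// Register a hand-rolled rule under the tag "notblank".
	_ = validate.RegisterValidation("notblank", func(fl validator.FieldLevel) bool {
		return strings.TrimSpace(fl.Field().String()) != ""
	})

	// Validate a single value against the newly registered tag.
	err := validate.Var("   ", "notblank")
	fmt.Println(err) // fails: value is only whitespace
}
```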
Package aw is a "plug-and-play" workflow development library/framework for Alfred 3 & 4 (https://www.alfredapp.com/). It requires Go 1.13 or later. It provides everything you need to create a polished and blazing-fast Alfred frontend for your project. As of AwGo 0.26, all applicable features of Alfred 4.1 are supported. The main features are: AwGo is an opinionated framework that expects to be used in a certain way in order to eliminate boilerplate. It *will* panic if not run in a valid, minimally Alfred-like environment. At a minimum the following environment variables should be set to meaningful values: NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, be sure to pin a version using go modules or similar. Be sure to also check out the _examples/ subdirectory, which contains some simple, but complete, workflows that demonstrate the features of AwGo and useful workflow idioms. Typically, you'd call your program's main entry point via Workflow.Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), use NewFromEnv() with a custom Env implementation. A Workflow can be re-configured later using its Configure() method. See the documentation for Option for more information on configuring a Workflow. AwGo can check for and install new versions of your workflow. Subpackage update provides an implementation of the Updater interface and sources to load updates from GitHub or Gitea releases, or from the URL of an Alfred `metadata.json` file. See subpackage update and _examples/update. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. See _examples/fuzzy for a basic demonstration, and _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. Fuzzy matching is done by package https://godoc.org/go.deanishe.net/fuzzy AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. 
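A compact sketch of that Run/feedback pattern; the item title and argument are illustrative, and the import path is taken from the project's README as an assumption:

```go
package main

import aw "github.com/deanishe/awgo" // import path is an assumption

var wf *aw.Workflow

func run() {
	// Add a result row for Alfred's Script Filter.
	wf.NewItem("Hello, Alfred").
		Subtitle("An example result").
		Arg("hello").
		Valid(true)

	// Send the items to Alfred as JSON on STDOUT.
	wf.SendFeedback()
}

func main() {
	wf = aw.New()
	// Run rescues panics, logs the stack trace, and shows the error in Alfred.
	wf.Run(run)
}
```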
The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel (see https://www.alfredapp.com/help/workflows/advanced/variables/#environment). Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred/info.plist with Config.Set(). Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, to browse/action files & directories, or to run External Triggers. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share (almost) the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "system maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
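For instance, a hedged sketch of binding workflow settings to a struct with Config.To. The variable names are illustrative, and the field-name-to-variable mapping described in the comments is an assumption:

```go
package main

import (
	"fmt"
	"log"

	aw "github.com/deanishe/awgo" // import path is an assumption
)

// settings mirrors variables from the workflow configuration sheet.
// Field names are assumed to map to upper-cased variable names
// (Port -> PORT) unless overridden with an `env` struct tag.
type settings struct {
	Hostname string `env:"HOSTNAME"`
	Port     int
}

func main() {
	wf := aw.New()
	s := &settings{}
	// Populate the struct from the environment variables Alfred exports.
	if err := wf.Config.To(s); err != nil {
		log.Fatal(err)
	}
	fmt.Println(s.Hostname, s.Port)
}
```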
Package nzgo is a pure Go driver for the database/sql package that works with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set LogLevel to control logging verbosity and LogPath to specify the log file path. To enable logging for the driver, you need to write the code below in your application. Declaring the elog variable and calling the elog.Initialize() function is mandatory; otherwise the application will fail with the error "runtime error: invalid memory address or nil pointer dereference". You can configure LogLevel and LogPath (i.e. the log file directory) as required, or you may skip initializing them, in which case default values are used. The default value for LogLevel is DEBUG, while the default LogPath is the same directory as your application. Other valid values for 'LogLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". The securityLevel setting controls the level of security (SSL/TLS) that the driver uses for the connection to the data store. onlyUnSecured: the driver does not use SSL. preferredUnSecured: if the server provides a choice, the driver does not use SSL. preferredSecured: if the server provides a choice, the driver uses SSL. onlySecured: the driver does not connect unless an SSL connection is available. The Netezza server supports the same security levels. The following cases fail: the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' or 'Preferred Unsecured' mode, or the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' or 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza driver supports the following connection syntaxes (or data source name formats): The above example opens a database handle on the NPS server 'vmnps-dw10.svl.ibm.com'. The Go driver should connect on port 5480 (the Postgres port). The user is admin, the password is password, the database is db1, sslmode is require, the location of the root certificate file is C:/Users/root31.crt, and the securityLevel is 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql. For additional instructions on querying, see the documentation for the database/sql package.
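Putting those pieces together, here is a hedged sketch of opening a handle and running a parameterized query. The driver name, import path, and table are assumptions; the connection values mirror the description above:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/IBM/nzgo" // registers the "nzgo" driver; import path is an assumption
)

func main() {
	// Connection parameters are illustrative; adjust host, credentials, etc.
	connStr := "user=admin password=password dbname=db1 host=vmnps-dw10.svl.ibm.com port=5480 sslmode=require"
	db, err := sql.Open("nzgo", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// '?' is the Netezza-specific parameter marker.
	rows, err := db.Query("SELECT name FROM employees WHERE age > ?", 30)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			log.Fatal(err)
		}
		fmt.Println(name)
	}
}
```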
nzgo also supports transaction queries as specified in the database/sql package: https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database; instead it stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below creates a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
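A hedged sketch of issuing such an unload through the driver. The exact external-table syntax and option values (notably REMOTESOURCE) are assumptions and may vary by NPS version, so consult the IBM documentation linked above:

```go
package example

import "database/sql"

// unloadToFile writes the contents of table t2 to a '|'-delimited flat
// file on the client without removing any rows from the database.
// The file path, table name, and the REMOTESOURCE value 'golang' are
// all assumptions for illustration.
func unloadToFile(db *sql.DB) error {
	_, err := db.Exec(`CREATE EXTERNAL TABLE '/tmp/et1.txt'
	    USING (DELIMITER '|' REMOTESOURCE 'golang')
	    AS SELECT * FROM t2`)
	return err
}
```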
Package properties provides functions for reading and writing ISO-8859-1 and UTF-8 encoded .properties files and has support for recursive property expansion. Java properties files are ISO-8859-1 encoded and use Unicode literals for characters outside the ISO character set. Unicode literals can be used in UTF-8 encoded properties files but aren't necessary. To load a single properties file use MustLoadFile(): To load multiple properties files use MustLoadFiles(), which loads the files in the given order and merges the result. Missing properties files can be ignored if the 'ignoreMissing' flag is set to true. Filenames can contain environment variables which are expanded before loading. All of the different key/value delimiters ' ', ':' and '=' are supported, as well as the comment characters '!' and '#' and multi-line values. Properties stores all comments preceding a key and provides GetComments() and SetComments() methods to retrieve and update them. The convenience functions GetComment() and SetComment() allow access to the last comment. The WriteComment() method writes properties files including the comments and with the keys in the original order. This can be used for sanitizing properties files. Property expansion is recursive; circular references and malformed expressions are not allowed and cause an error. Expansion of environment variables is supported. The default property expansion format is ${key} but can be changed by setting different pre- and postfix values on the Properties object. Properties provides convenience functions for getting typed values with default values if the key does not exist or the type conversion failed. As an alternative, properties may be applied to the standard library's flag implementation at any time. Properties provides several MustXXX() convenience functions which will terminate the app if an error occurs. The behavior on failure is configurable; the default is to call log.Fatal(err). To have the MustXXX() functions panic instead of logging the error, set a different ErrorHandler before you use the Properties package. You can also provide your own ErrorHandler function; the only requirement is that the error handler function must exit after handling the error. Properties can also be loaded into a struct via the `Decode` method; see the `Decode()` method for the full documentation. The following documents provide a description of the properties file format: http://en.wikipedia.org/wiki/.properties http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load%28java.io.Reader%29
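A minimal sketch of loading a file and reading typed values with defaults; the file name and keys are illustrative, and the import path is an assumption:

```go
package main

import (
	"fmt"

	"github.com/magiconair/properties" // import path is an assumption
)

func main() {
	// Panics (or calls the configured ErrorHandler) if the file cannot be read.
	// "${HOME}" is expanded before loading, as described above.
	p := properties.MustLoadFile("${HOME}/myapp.properties", properties.UTF8)

	// Typed getters fall back to the default when the key is missing
	// or the conversion fails.
	host := p.GetString("host", "localhost")
	port := p.GetInt("port", 8080)
	fmt.Println(host, port)
}
```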
Package rdb implements parsing and encoding of the Redis RDB file format.
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: Groups can be used inside patterns, as long as they are non-capturing (?:re). For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): Note that if any capturing groups are present, mux will panic() during parsing. To prevent this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to "/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably when capturing groups were present. And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relatively to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables. 
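A minimal sketch of the registration pattern just described; the paths and handlers are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()

	// {key} matches anything up to the next slash; {id:[0-9]+} adds a pattern.
	r.HandleFunc("/products/{key}", func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req) // route variables for this request
		fmt.Fprintf(w, "product: %s\n", vars["key"])
	})
	r.HandleFunc("/articles/{category}/{id:[0-9]+}", func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req)
		fmt.Fprintf(w, "category: %s, id: %s\n", vars["category"], vars["id"])
	})

	log.Fatal(http.ListenAndServe(":8000", r))
}
```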
For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host and query value variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route. For example, we could do: ...and the route will match both requests with a Content-Type of `application/json` as well as `application/text`. There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well: Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking. Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). A very basic middleware which logs the URI of the request being handled could be written as: Middlewares can be added to a router using `Router.Use()`: A more complex authentication middleware, which maps session token to users, could be written as: Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to.
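A minimal sketch of such a logging middleware wired into a router; the route and log format are illustrative:

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

// loggingMiddleware logs each request URI, then hands the request
// to the next handler in the chain.
func loggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Println(r.RequestURI)
		next.ServeHTTP(w, r) // omitting this call would abort the request
	})
}

func main() {
	r := mux.NewRouter()
	r.Use(loggingMiddleware)
	r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	log.Fatal(http.ListenAndServe(":8000", r))
}
```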
Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: See the documentation for the V function for an explanation of these examples: Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to files in a temporary directory. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
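A small sketch of the basic and V-style calls described above (the messages follow the package's own examples; the import path is an assumption):

```go
package main

import (
	"flag"

	"github.com/golang/glog" // import path is an assumption
)

func main() {
	flag.Parse()       // glog registers its flags; parse them before any logging
	defer glog.Flush() // output is buffered; flush it before exiting

	glog.Info("Prepare to repel boarders")

	// V-style logging: only emitted when -v=2 (or higher) is set.
	if glog.V(2) {
		glog.Info("Starting transaction...")
	}
	glog.V(2).Infoln("Processed", 5, "elements")
}
```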
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
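A minimal sketch of a preset logger and both APIs; the field names and messages are illustrative:

```go
package main

import "go.uber.org/zap"

func main() {
	logger, _ := zap.NewProduction() // opinionated preset
	defer logger.Sync()              // flush any buffered entries before exit

	// Strongly typed, allocation-conscious API.
	logger.Info("request handled",
		zap.String("url", "https://example.com"),
		zap.Int("attempt", 3),
	)

	// Looser, printf-style API built on the same core.
	sugar := logger.Sugar()
	sugar.Infow("request handled", "url", "https://example.com")
	sugar.Infof("handled %d requests", 42)
}
```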
Package main is the UBNT edgeos-dnsmasq-blacklist dnsmasq DNS blacklisting and redirection tool. View the software license here (https://github.com/britannic/blacklist/blob/master/LICENSE.txt) • Latest version (https://github.com/britannic/blacklist) • Go documentation (https://godoc.org/github.com/britannic/blacklist) • Build status (https://travis-ci.org/britannic/blacklist) • Test coverage (https://coveralls.io/github/britannic/blacklist?branch=master) • Go Report Card (https://goreportcard.com/report/github.com/britannic/blacklist) Follow the conversation @ community.ubnt.com (https://community.ubnt.com/t5/EdgeRouter/DNS-Adblocking-amp-Blacklisting-dnsmasq-Configuration/td-p/2215008/jump-to/first-unread-message "Follow the conversation about this software in the EdgeRouter forum (https://community.ubnt.com/t5/EdgeRouter/)") Please show your thanks by donating to the project using Square Cash (https://cash.me/$HelmRockSecurity/) or PayPal (https://www.paypal.me/helmrocksecurity/). We greatly appreciate any and all donations - Thank you! Funds go to maintaining development servers and networks. Note: This is 3rd party software and isn't supported or endorsed by Ubiquiti Networks® • Overview (#overview) • Donate (#donations-and-sponsorship) • Copyright (#copyright) • Licenses (#licenses) • Latest Version (#latest-version) • Change Log (https://github.com/britannic/blacklist/blob/master/CHANGELOG.md) • Features (#features) • Compatibility (#compatibility) • Installation (#installation) • Using apt-get (#apt-get-installation---erlite-3-erpoe-5-er-x-er-x-sfp--unifi-gateway-3) • Using dpkg (#dpkg-installation---best-for-disk-space-constrained-routers) • Upgrade (#upgrade) • Removal (#removal) • Frequently Asked Questions (#frequently-asked-questions) • Can I donate to the project?
(#donations-and-sponsorship) • Does the install back up my blacklist configuration before deleting it? (#does-the-install-backup-my-blacklist-configuration-before-deleting-it) • Does update-dnsmasq run automatically? (#does-update-dnsmasq-run-automatically) • How do I add or delete sources? (#how-do-i-add-or-delete-sources) • How do I back up my blacklist configuration and restore it later? (#how-do-i-back-up-my-blacklist-configuration-and-restore-it-later) • How do I configure dnsmasq? (#how-do-i-configure-dnsmasq) • How do I configure local file sources instead of internet-based ones? (#how-do-i-configure-local-file-sources-instead-of-internet-based-ones) • How do I disable/enable dnsmasq blacklisting? (#how-do-i-disableenable-dnsmasq-blacklisting) • How do I exclude or include a host or a domain? (#how-do-i-exclude-or-include-a-host-or-a-domain) • How do I globally exclude or include hosts or domains? (#how-do-i-globally-exclude-or-include-hosts-or-a-domains) • How do I use the command line switches? (#how-do-i-use-the-command-line-switches) • How can I keep my USG configuration after an upgrade, provision or reboot? (#how-do-can-keep-my-usg-configuration-after-an-upgrade-provision-or-reboot) • How does whitelisting work? (#how-does-whitelisting-work) • What is the difference between blocking domains and hosts? (#what-is-the-difference-between-blocking-domains-and-hosts) • Which blacklist sources are installed by default? (#which-blacklist-sources-are-installed-by-default) EdgeMax dnsmasq DNS blacklisting and redirection is inspired by the users at EdgeMAX Community (https://community.ubnt.com/t5/EdgeMAX/bd-p/EdgeMAX/) [Top] (#contents) • Copyright © 2019 Helm Rock Consulting (https://www.helmrock.com/) [Top] (#contents) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the FreeBSD Project. [Top] (#contents) Latest version (https://github.com/britannic/blacklist/releases/latest) Release v1.1.6.2 (April 24, 2018) • Code refactor • Global whitelist and blacklist configuration files now have their own prefix: "roots" i.e.
[Top] (#contents) • See the changelog (https://github.com/britannic/blacklist/blob/master/CHANGELOG.md) for details. [Top] (#contents) • Adds DNS blacklisting integration to the EdgeRouter configuration • Generates configuration files used directly by dnsmasq to redirect DNS lookups • Integrated with the EdgeMax OS CLI • Any FQDN in the blacklist will force dnsmasq to return the configured DNS redirect IP address [Top] (#contents) • edgeos-dnsmasq-blacklist has been tested on the EdgeRouter ERLite-3, ERPoe-5, ER-X and UniFi Security Gateway USG-3 routers • EdgeMAX versions: v1.9.7+hotfix.4-v1.10.1, UniFi: v4.4.12-v4.4.18 • Integration could be adapted to work on VyOS and Vyatta-derived ports, since EdgeOS is a fork and port of Vyatta 6.3 [Top] (#contents) • Using apt-get (#apt-get-installation---erlite-3-erpoe-5-er-x-er-x-sfp--unifi-gateway-3) - works for all routers • Using dpkg (#dpkg-installation---best-for-disk-space-constrained-routers) - best for disk space constrained routers [Top] (#contents) apt-get Installation - ERLite-3, ERPoe-5, ER-X, ER-X-SFP & UniFi-Gateway-3 • Add the blacklist debian package repository using the router's CLI shell • Add the GPG signing key • Update the system repositories and install edgeos-dnsmasq-blacklist [Top] (#contents) dpkg Installation - best for disk space constrained routers EdgeRouter ERLite-3, ERPoe-5 & UniFi-Gateway-3 [Top] (#contents) EdgeRouter ER-X & ER-X-SFP • Ensure the router has enough space by removing unnecessary files • Now download and install the edgeos-dnsmasq-blacklist package [Top] (#contents) • If the repository is set up and you are using apt-get: • Note: if you are using dpkg, it cannot upgrade packages, so follow these instructions (#dpkg-installation---best-for-disk-space-constrained-routers) and the previous package version will be automatically removed before the new package version is installed [Top] (#contents) EdgeMAX - All Platforms [Top] (#contents) How do I disable/enable dnsmasq blacklisting? • Use these CLI configure commands: • Disable: • Enable: [Top] (#contents) Does the install back up my blacklist configuration before deleting it? • If a blacklist configuration already exists, the install routine will automatically back it up to /config/user-data/blacklist.$(date +'%FT%H%M%S').cmds [Top] (#contents) How do I back up my blacklist configuration and restore it later? • Use the following commands (make a note of the file name): • After installing the latest version, you can merge your backed-up configuration: • If you prefer to delete the default configuration and restore your previous configuration, run these commands: [Top] (#contents) Which blacklist sources are installed by default? • You can use this command in the CLI shell to view the current sources after installation, or view the log to see previous downloads: [Top] (#contents) How do I configure local file sources instead of internet-based ones? • Use these commands to configure a local file source • File contents example for /config/user-data/blist.hosts.src: [Top] (#contents) How can I keep my USG configuration after an upgrade, provision or reboot?
• Follow these instructions (https://britannic.github.io/install-edgeos-packages/) on how to automatically install edgeos-dnsmasq-blacklist • Create a config.gateway.json file following these instructions (https://help.ubnt.com/hc/en-us/articles/215458888-UniFi-How-to-further-customize-USG-configuration-with-config-gateway-json) • Here's a sample config.gateway.json (https://raw.githubusercontent.com/britannic/blacklist/master/config.gateway.json) [Top] (#contents) How do I add or delete sources? • Use the CLI configure command to delete domains and hosts sources: • To add a source, first check it can serve a text list and also note the prefix (if any) before the hosts or domains, e.g. http://www.malwaredomainlist.com/ (http://www.malwaredomainlist.com/) has this format: • So the prefix is "127.0.0.1 " • Here's how to create the source in the CLI: [Top] (#contents) How do I globally exclude or include hosts or domains? • Use these example commands to globally include or exclude blacklisted entries: [Top] (#contents) How do I exclude or include a host or a domain? • Use these example commands to include or exclude blacklisted entries: [Top] (#contents) How does whitelisting work? • dnsmasq will whitelist any entries in the configuration file domains and hosts (servers) with a hash in place of an IP address (the "#" forces dnsmasq to forward the DNS request to the router's configured nameservers) • i.e. servers (hosts) • i.e. domains [Top] (#contents) Does update-dnsmasq run automatically? • Yes, a scheduled task is created and runs daily at midnight; a random start delay is used to ensure other routers in the same time zone won't overload the source servers. • The random start delay window is configured in seconds using this command - this example sets the start delay between 1-10800 seconds (0-3 hours): • It can be reconfigured using these CLI configuration commands: • For example, to change the execution interval to every 6 hours, use this command: • In daily use, no additional interaction with update-dnsmasq is required. By default, cron will run update-dnsmasq at midnight each day to download the blacklist sources and update the dnsmasq configuration files in /etc/dnsmasq.d. dnsmasq will automatically be reloaded after the configuration file update is completed. [Top] (#contents) How do I use the command line switches? • update-dnsmasq has the following command line switches available: [Top] (#contents) How do I configure dnsmasq? • dnsmasq may need to be configured to ensure blacklisting works correctly • Here is an example using the EdgeOS configuration shell [Top] (#contents) What is the difference between blocking domains and hosts? • The difference lies in the order of update-dnsmasq's processing algorithm. Domains are processed first and take precedence over hosts, so that a blacklisted domain will force update-dnsmasq's source parser to exclude subsequent hosts from the same domain. This reduces dnsmasq's list of lookups, since it will automatically redirect hosts for a blacklisted domain. [Top] (#contents) blacklist
Package v4l2 exposes the V4L2 (Video4Linux version 2) API to Go. Example: That example opens the V4L2 device at "/dev/video0" on the file system and displays some basic information about the device. (Of course, we could have opened one of the other V4L2 devices, such as v4l2.Video1, v4l2.Video2, ..., or v4l2.Video63.) Continuing this same example: Here we iterate through the formats that are supported by this device. Extending that last code block to fill in that "//@TODO", we can iterate through the frame sizes for each format with the following: One thing to be cognisant of is that on Linux systems, V4L2 (Video4Linux version 2) creates a (special) file in the /dev/ directory for every V4L2 device. These device files have path names such as: • /dev/video0 • /dev/video1 • /dev/video2 … • /dev/video63 You would call Open using these names directly. For example: However, this package also provides constants that can be used. For example:
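A rough sketch of the flow just described. The import path, the exact signatures of Open and Close, and the Video0 constant are assumptions based on the description above, not confirmed details:

```go
package main

import (
	"fmt"
	"log"

	v4l2 "github.com/reiver/go-v4l2" // import path is an assumption
)

func main() {
	// Open the first video device; v4l2.Video0 is assumed to name "/dev/video0".
	device, err := v4l2.Open(v4l2.Video0)
	if err != nil {
		log.Fatal(err)
	}
	defer device.Close()

	fmt.Println("opened", v4l2.Video0)
}
```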
Package gcfg reads "INI-style" text-based configuration files with "name=value" pairs grouped into sections (gcfg files). This package is still a work in progress; see the sections below for planned changes. The syntax is based on that used by git config: http://git-scm.com/docs/git-config#_syntax . There are some (planned) differences compared to the git config format: The functions in this package read values into a user-defined struct. Each section corresponds to a struct field in the config struct, and each variable in a section corresponds to a data field in the section struct. The mapping of each section or variable name to fields is done either based on the "gcfg" struct tag or by matching the name of the section or variable, ignoring case. In the latter case, hyphens '-' in section and variable names correspond to underscores '_' in field names. Fields must be exported; to use a section or variable name starting with a letter that is neither upper- or lower-case, prefix the field name with 'X'. (See https://code.google.com/p/go/issues/detail?id=5763#c4 .) For sections with subsections, the corresponding field in config must be a map, rather than a struct, with string keys and pointer-to-struct values. Values for subsection variables are stored in the map with the subsection name used as the map key. (Note that unlike section and variable names, subsection names are case sensitive.) When using a map, and there is a section with the same section name but without a subsection name, its values are stored with the empty string used as the key. It is possible to provide default values for subsections in the section "default-<sectionname>" (or by setting values in the corresponding struct field "Default_<sectionname>"). The functions in this package panic if config is not a pointer to a struct, or when a field is not of a suitable type (either a struct or a map with string keys and pointer-to-struct values). The section structs in the config struct may contain single-valued or multi-valued variables. Variables of unnamed slice type (that is, a type starting with `[]`) are treated as multi-value; all others (including named slice types) are treated as single-valued variables. Single-valued variables are handled based on the type as follows. Unnamed pointer types (that is, types starting with `*`) are dereferenced, and if necessary, a new instance is allocated. For types implementing the encoding.TextUnmarshaler interface, the UnmarshalText method is used to set the value. Implementing this method is the recommended way for parsing user-defined types. For fields of string kind, the value string is assigned to the field, after unquoting and unescaping as needed. For fields of bool kind, the field is set to true if the value is "true", "yes", "on" or "1", and set to false if the value is "false", "no", "off" or "0", ignoring case. In addition, single-valued bool fields can be specified with a "blank" value (variable name without equals sign and value); in such case the value is set to true. Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as decimal or hexadecimal (if having '0x' prefix). (This is to prevent unintuitively handling zero-padded numbers as octal.) Other types having [u]int* as the underlying type, such as os.FileMode and uintptr allow decimal, hexadecimal, or octal values. 
Parsing mode for integer types can be overridden using the struct tag option ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters (each standing for decimal, hexadecimal, and octal, respectively). All other types are parsed using fmt.Sscanf with the "%v" verb. For multi-valued variables, each individual value is parsed as above and appended to the slice. If the first value is specified as a "blank" value (variable name without an equals sign and value), a new slice is allocated; that is, any values previously set in the slice will be ignored. The types subpackage provides helpers for parsing "enum-like" and integer types. There are 3 types of errors: Programmer errors trigger panics. These should be fixed by the programmer before releasing code that uses gcfg. Data errors cause gcfg to return a non-nil error value. This includes the case when there are extra unknown key-value definitions in the configuration data (extra data). However, on some occasions it is desirable to be able to proceed in situations when the only data error is that of extra data. These errors are handled at a different (warning) priority and can be filtered out programmatically. To ignore extra data warnings, wrap the gcfg.Read*Into invocation in a call to gcfg.FatalOnly. The following is a list of changes under consideration:
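A minimal sketch of reading a gcfg string into a struct, following the section-to-struct mapping described above; the section and variable names are illustrative, and the import path is an assumption:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/gcfg.v1" // import path is an assumption
)

// Config maps the [section] block to the Section struct field;
// the variable "name" maps to the Name field, ignoring case.
type Config struct {
	Section struct {
		Name string
		Flag bool
	}
}

func main() {
	cfgStr := "[section]\nname = value\nflag = true\n"
	var cfg Config
	if err := gcfg.ReadStringInto(&cfg, cfgStr); err != nil {
		log.Fatalf("failed to parse: %v", err)
	}
	fmt.Println(cfg.Section.Name, cfg.Section.Flag)
}
```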
Package cookiemonster provides methods for parsing Netscape format cookie files into slices of http.Cookie
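A minimal sketch of typical usage; the import path, ParseFile helper, and its signature are assumptions, not confirmed by this documentation:

```go
package main

import (
	"fmt"
	"log"

	cookiemonster "github.com/MercuryEngineering/CookieMonster" // import path is an assumption
)

func main() {
	// Parse a Netscape-format cookies.txt into a slice of *http.Cookie.
	cookies, err := cookiemonster.ParseFile("cookies.txt")
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cookies {
		fmt.Println(c.Name, c.Value)
	}
}
```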
Package gcfg reads "INI-style" text-based configuration files with "name=value" pairs grouped into sections (gcfg files). This package is still a work in progress; see the sections below for planned changes. The syntax is based on that used by git config: http://git-scm.com/docs/git-config#_syntax . There are some (planned) differences compared to the git config format: The functions in this package read values into a user-defined struct. Each section corresponds to a struct field in the config struct, and each variable in a section corresponds to a data field in the section struct. The mapping of each section or variable name to fields is done either based on the "gcfg" struct tag or by matching the name of the section or variable, ignoring case. In the latter case, hyphens '-' in section and variable names correspond to underscores '_' in field names. Fields must be exported; to use a section or variable name starting with a letter that is neither upper- or lower-case, prefix the field name with 'X'. (See https://code.google.com/p/go/issues/detail?id=5763#c4 .) For sections with subsections, the corresponding field in config must be a map, rather than a struct, with string keys and pointer-to-struct values. Values for subsection variables are stored in the map with the subsection name used as the map key. (Note that unlike section and variable names, subsection names are case sensitive.) When using a map, and there is a section with the same section name but without a subsection name, its values are stored with the empty string used as the key. It is possible to provide default values for subsections in the section "default-<sectionname>" (or by setting values in the corresponding struct field "Default_<sectionname>"). The functions in this package panic if config is not a pointer to a struct, or when a field is not of a suitable type (either a struct or a map with string keys and pointer-to-struct values). The section structs in the config struct may contain single-valued or multi-valued variables. Variables of unnamed slice type (that is, a type starting with `[]`) are treated as multi-value; all others (including named slice types) are treated as single-valued variables. Single-valued variables are handled based on the type as follows. Unnamed pointer types (that is, types starting with `*`) are dereferenced, and if necessary, a new instance is allocated. For types implementing the encoding.TextUnmarshaler interface, the UnmarshalText method is used to set the value. Implementing this method is the recommended way for parsing user-defined types. For fields of string kind, the value string is assigned to the field, after unquoting and unescaping as needed. For fields of bool kind, the field is set to true if the value is "true", "yes", "on" or "1", and set to false if the value is "false", "no", "off" or "0", ignoring case. In addition, single-valued bool fields can be specified with a "blank" value (variable name without equals sign and value); in such case the value is set to true. Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as decimal or hexadecimal (if having '0x' prefix). (This is to prevent unintuitively handling zero-padded numbers as octal.) Other types having [u]int* as the underlying type, such as os.FileMode and uintptr allow decimal, hexadecimal, or octal values. 
Parsing mode for integer types can be overridden using the struct tag option ",int=mode", where mode is a combination of the 'd', 'h', and 'o' characters (standing for decimal, hexadecimal, and octal, respectively). All other types are parsed using fmt.Sscanf with the "%v" verb. For multi-valued variables, each individual value is parsed as above and appended to the slice. If the first value is specified as a "blank" value (variable name without an equals sign and value), a new slice is allocated; that is, any values previously set in the slice will be ignored. The types subpackage provides helpers for parsing "enum-like" and integer types. There are three types of errors. Programmer errors trigger panics; these should be fixed by the programmer before releasing code that uses gcfg. Data errors cause gcfg to return a non-nil error value. This includes the case where there are extra, unknown key-value definitions in the configuration data (extra data). However, it is sometimes desirable to be able to proceed when the only data error is extra data. These errors are handled at a different (warning) priority and can be filtered out programmatically: to ignore extra-data warnings, wrap the gcfg.Read*Into invocation in a call to gcfg.FatalOnly, as sketched below. A list of further changes is under consideration.
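For illustration, here is a hedged sketch of reading gcfg data into a struct and filtering extra-data warnings; the gopkg.in/gcfg.v1 import path is an assumption, while ReadStringInto and FatalOnly are the calls described above:

    package main

    import (
        "fmt"

        "gopkg.in/gcfg.v1"
    )

    func main() {
        // "flag" has a blank value (no '='), so the bool field is set to true.
        cfgStr := "[section]\nname = value\nflag\n"

        var cfg struct {
            Section struct {
                Name string
                Flag bool
            }
        }
        // FatalOnly filters out extra-data warnings and returns only fatal errors.
        if err := gcfg.FatalOnly(gcfg.ReadStringInto(&cfg, cfgStr)); err != nil {
            panic(err)
        }
        fmt.Println(cfg.Section.Name, cfg.Section.Flag) // value true
    }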
Package blog4go provides an efficient and easy-to-use writer library for logging to files, the console, or sockets. Writers support format-string filtering and calling user-defined hooks in asynchronous mode.
Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides the functions Info, Warning, Error, and Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. A basic example follows; see the documentation for the V function for an explanation of V-style logging. Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to files in a temporary directory. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
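A minimal sketch of the flag-driven setup described above, assuming the commonly used github.com/golang/glog import path:

    package main

    import (
        "flag"

        "github.com/golang/glog"
    )

    func main() {
        flag.Parse()       // glog registers its flags (-v, -vmodule, ...) with the flag package
        defer glog.Flush() // output is buffered; flush before exiting

        glog.Info("preparing to launch")
        glog.Warningf("only %d replicas available", 2)

        // V-style logging: the guarded calls are no-ops unless -v=2
        // (or a matching -vmodule pattern) was passed on the command line.
        if glog.V(2) {
            glog.Info("starting transaction...")
        }
        glog.V(2).Infoln("processed", 42, "items")
    }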
Package pdf implements reading of PDF files. PDF is Adobe's Portable Document Format, ubiquitous on the internet. A PDF document is a complex data format built on a fairly simple structure. This package exposes the simple structure along with some wrappers to extract basic information. If more complex information is needed, it is possible to extract that information by interpreting the structure exposed by this package. Specifically, a PDF is a data structure built from Values, each of which has one of the following Kinds: The accessors on Value—Int64, Float64, Bool, Name, and so on—return a view of the data as the given type. When there is no appropriate view, the accessor returns a zero result. For example, the Name accessor returns the empty string if called on a Value v for which v.Kind() != Name. Returning zero values this way, especially from the Dict and Array accessors, which themselves return Values, makes it possible to traverse a PDF quickly without writing any error checking. On the other hand, it means that mistakes can go unreported. The basic structure of the PDF file is exposed as the graph of Values. Most richer data structures in a PDF file are dictionaries with specific interpretations of the name-value pairs. The Font and Page wrappers make the interpretation of a specific Value as the corresponding type easier. They are only helpers, though: they are implemented only in terms of the Value API and could be moved outside the package. Equally important, traversal of other PDF data structures can be implemented in other packages as needed.
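As a hedged sketch of this traversal style (assuming the rsc.io/pdf import path along with the Open, NumPage, Page, and Key accessors described above):

    package main

    import (
        "fmt"

        "rsc.io/pdf"
    )

    func main() {
        r, err := pdf.Open("doc.pdf")
        if err != nil {
            panic(err)
        }
        for i := 1; i <= r.NumPage(); i++ { // pages are indexed starting at 1
            p := r.Page(i)
            // Accessors return zero values when the view does not apply,
            // so chained traversal needs no intermediate error checks.
            fmt.Println("MediaBox:", p.V.Key("MediaBox"))
        }
    }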
Package protolog implements a simple file format for a sequence of blobs, with the ability to store a message type with each blob as well as a checksum. It is intended for logging protobuf messages. Unlike other formats, it tries to be simple, so writing a reader or writer in other languages is trivial. The design is a modified form of Eric Lesh's recordio Go implementation (github.com/eclesh/recordio). It uses fixed-size headers with support for a uint16 ID of the message type and a CRC-32C checksum. Each blob must be less than 4 GiB (2^32 bytes).
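The documentation above fixes the header contents but not their exact layout, so the following is only a sketch of the idea: a fixed-size header carrying the blob length, the uint16 message-type ID, and a CRC-32C of the blob. The field order and endianness here are assumptions, not the package's documented wire format.

    package main

    import (
        "encoding/binary"
        "hash/crc32"
        "os"
    )

    // castagnoli is the polynomial used by CRC-32C.
    var castagnoli = crc32.MakeTable(crc32.Castagnoli)

    // writeRecord writes one blob preceded by an assumed 10-byte header:
    // 4-byte blob length, 2-byte message-type ID, 4-byte CRC-32C of the blob.
    func writeRecord(f *os.File, msgType uint16, blob []byte) error {
        var hdr [10]byte
        binary.LittleEndian.PutUint32(hdr[0:4], uint32(len(blob))) // < 4 GiB
        binary.LittleEndian.PutUint16(hdr[4:6], msgType)
        binary.LittleEndian.PutUint32(hdr[6:10], crc32.Checksum(blob, castagnoli))
        if _, err := f.Write(hdr[:]); err != nil {
            return err
        }
        _, err := f.Write(blob)
        return err
    }

    func main() {
        f, err := os.Create("messages.log")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        if err := writeRecord(f, 1, []byte("serialized protobuf bytes")); err != nil {
            panic(err)
        }
    }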
Package cookiefile provides functions for reading Netscape-formatted cookie files. For more information about the file format, see https://curl.haxx.se/docs/http-cookies.html
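As a hedged illustration of the format behind this package (seven TAB-separated fields: domain, include-subdomains flag, path, secure flag, expiry, name, value), here is a sketch that parses one line with the standard library; the parseLine helper is illustrative and is not this package's API:

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "strings"
        "time"
    )

    func parseLine(line string) (*http.Cookie, error) {
        f := strings.Split(line, "\t")
        if len(f) != 7 {
            return nil, fmt.Errorf("expected 7 fields, got %d", len(f))
        }
        expiry, err := strconv.ParseInt(f[4], 10, 64) // Unix timestamp
        if err != nil {
            return nil, err
        }
        return &http.Cookie{
            Domain:  f[0],
            Path:    f[2],
            Secure:  f[3] == "TRUE",
            Expires: time.Unix(expiry, 0),
            Name:    f[5],
            Value:   f[6],
        }, nil
    }

    func main() {
        c, err := parseLine(".example.com\tTRUE\t/\tFALSE\t2145916800\tsession\tabc123")
        if err != nil {
            panic(err)
        }
        fmt.Println(c.Name, c.Value, c.Domain)
    }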
Package osrelease is a slightly pedantic implementation of freedesktop.org's specification of the "os-release" file format. Please see https://www.freedesktop.org/software/systemd/man/os-release.html for details about the "os-release" file format and its various OS identification variables. The package fetches os-release(5) variables with their values (when systemd is present) so that they can be inspected or printed.
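A minimal sketch of reading /etc/os-release by hand, based only on the file format from the freedesktop.org specification (KEY=VALUE lines, optional quoting, '#' comments); it does not use this package's API:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        vars := make(map[string]string)
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue // skip blanks and comments
            }
            key, val, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            vars[key] = strings.Trim(val, `"'`) // crude unquoting, for illustration only
        }
        fmt.Println(vars["ID"], vars["VERSION_ID"])
    }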
Package config is an encoding-agnostic configuration abstraction. It supports merging multiple configuration files, expanding environment variables, and a variety of other small niceties. It currently supports YAML, but may be extended in the future to support more restrictive encodings like JSON or TOML. It's often convenient to separate configuration into multiple files; for example, an application may want to first load some universally-applicable configuration and then merge in some environment-specific overrides. This package supports this pattern in a variety of ways, all of which use the same merge logic. Simple types (numbers, strings, dates, and anything else YAML would consider a scalar) are merged by replacing lower-priority values with higher-priority overrides, as in a merge of base.yaml and override.yaml. Slices, arrays, and anything else YAML would consider a sequence are likewise replaced wholesale when base.yaml and override.yaml are merged. Maps are recursively deep-merged, handling scalars and sequences as described above, even across a more complex set of YAML files. In all cases, explicit nils (represented in YAML with a tilde) override any pre-existing configuration. By default, the NewYAML constructor enables gopkg.in/yaml.v2's strict unmarshalling mode. This prevents a variety of common programmer errors, especially when deep-merging loosely-typed YAML files. In strict mode, providers throw errors if keys are duplicated in the same configuration source, all keys aren't used when populating a struct, or a merge encounters incompatible data types. This behavior can be disabled with the Permissive option. To maintain backward compatibility, all other constructors default to permissive unmarshalling. YAML allows strings to appear quoted or unquoted, so the quoted and unquoted forms of the same string are identical. However, the YAML specification special-cases some unquoted strings. Most obviously, true and false are interpreted as Booleans (unless quoted). Less obviously, yes, no, on, off, and many variants of these words are also treated as Booleans (see http://yaml.org/type/bool.html for the complete specification). Correctly deep-merging sources requires this package to unmarshal and then remarshal all YAML, which implicitly converts these special-cased unquoted strings to their canonical representation. Quoting special-cased strings prevents this surprising behavior. Unfortunately, this package was released with a variety of bugs and an overly large API. The internals of the configuration provider have been completely reworked and all known bugs have been addressed, but many duplicative exported functions were retained to preserve backward compatibility. New users should rely on the NewYAML constructor. In particular, avoid NewValue - it's unnecessary, complex, and may panic. Deprecated functions are documented in the format expected by the staticcheck linter, available at https://staticcheck.io/. A merge sketch follows.
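A hedged sketch of the merge behavior, using the NewYAML constructor with Source options and Value.Populate as described above:

    package main

    import (
        "fmt"
        "strings"

        "go.uber.org/config"
    )

    func main() {
        base := strings.NewReader("server:\n  host: localhost\n  port: 8080\n")
        override := strings.NewReader("server:\n  port: 9090\n")

        // Later sources take priority; maps are deep-merged, scalars replaced.
        yaml, err := config.NewYAML(config.Source(base), config.Source(override))
        if err != nil {
            panic(err)
        }

        var srv struct {
            Host string
            Port int
        }
        if err := yaml.Get("server").Populate(&srv); err != nil {
            panic(err)
        }
        fmt.Println(srv.Host, srv.Port) // localhost 9090
    }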
Package gofpdf implements a PDF document generator with high-level support for text, drawing and images. Features include:

  - UTF-8 support
  - Choice of measurement unit, page format and margins
  - Page header and footer management
  - Automatic page breaks, line breaks, and text justification
  - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
  - Colors, gradients and alpha channel transparency
  - Outline bookmarks
  - Internal and external links
  - TrueType, Type1 and encoding support
  - Page compression
  - Lines, Bézier curves, arcs, and ellipses
  - Rotation, scaling, skewing, translation, and mirroring
  - Clipping
  - Document protection
  - Layers
  - Templates
  - Barcodes
  - Charting facility
  - Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general-purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. To install the package on your system (and later to receive updates), use go get. The following Go code generates a simple PDF file (a minimal sketch appears at the end of this entry); see the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation.
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the makefont command against your font file. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two-dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should:

  - be compatible with the MIT License
  - be properly documented
  - be formatted with go fmt
  - include an example in fpdf_test.go if appropriate
  - conform to the standards of golint and go vet (that is, golint . and go vet . should not generate any warnings)
  - not diminish test coverage

Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, and enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table-of-contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, for modification times, and for file attachments and annotations. Roadmap:

  - Remove all legacy code page font support; use UTF-8 exclusively
  - Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call, where it can be handled by the application. A minimal sketch of such a program follows.
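A minimal sketch of that program, assuming the github.com/jung-kurt/gofpdf import path:

    package main

    import "github.com/jung-kurt/gofpdf"

    func main() {
        pdf := gofpdf.New("P", "mm", "A4", "") // portrait, millimeters, A4, no font directory
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Errors accumulate internally; checking once at output is sufficient.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            panic(err)
        }
    }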
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires that you log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values; log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json. This package also implements three Handlers that add debugging information to the context: CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context:

    h := log.CallerFileHandler(log.StdoutHandler)
    log.Root().SetHandler(h)
    ...
    log.Error("open file", "err", err)

This will output a line that looks like:

    lvl=eror t=2014-05-02T16:07:23-0700 msg="open file" err="file not found" caller=data.go:42

Here's an example that logs the call stack rather than just the call site:

    h := log.CallerStackHandler("%+v", log.StdoutHandler)
    log.Root().SetHandler(h)
    ...
    log.Error("open file", "err", err)

This will output a line that looks like:

    lvl=eror t=2014-05-02T16:07:23-0700 msg="open file" err="file not found" stack="[pkg/data.go:42 pkg/cmd/main.go]"

The "%+v" format instructs the handler to include the path of the source file relative to the compile-time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own.
Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library, called FailoverHandler. Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it will not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (for example, if you are logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: But now, even after a player has died, the logger will still report they are alive, because the logging context was evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy, or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you:

  - Any log record containing an error will still be printed with the error explained to you as part of the log record.
  - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values.

Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, for example if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures, like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure.
Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs. Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library:

    code.google.com/p/log4go
    github.com/op/go-logging
    github.com/technoweenie/grohl
    github.com/Sirupsen/logrus
    github.com/kr/logfmt
    github.com/spacemonkeygo/spacelog
    golang's stdlib, notably io and net/http
    https://xkcd.com/927/

A hedged usage sketch follows.
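A hedged sketch of the basics, assuming the github.com/inconshreveable/log15 import path:

    package main

    import log "github.com/inconshreveable/log15"

    func main() {
        // Keys and values alternate in the variadic argument list;
        // the first argument is the value for the implicit 'msg' key.
        log.Info("page accessed", "path", "/org", "user", "bob")

        // Attach context to a child logger; it is included with every line.
        srvlog := log.New("module", "app/server")
        srvlog.Warn("abnormal conn rate", "rate", 0.500, "low", 0.100, "high", 0.800)
    }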
Package for parsing an HAProxy config file into a structured form. You might also use it to convert the file into a general JSON format. Limitation: there is currently no way of writing the structure back out in HAProxy config file format.