Package flags provides an extensive command line option parser. The flags package is similar in functionality to the Go built-in flag package but provides more options and uses reflection to provide a convenient and succinct way of specifying command line options. The following features are supported in go-flags: Additional features specific to Windows: The flags package uses structs, reflection and struct field tags to allow users to specify command line options. This results in very simple and concise specification of your application options. For example: This specifies one option with a short name -v and a long name --verbose. When either -v or --verbose is found on the command line, a 'true' value will be appended to the Verbose field. For example, when specifying -vvv, the resulting value of Verbose will be {[true, true, true]}. Slice options work exactly the same as primitive type options, except that whenever the option is encountered, a value is appended to the slice. Map options from string to primitive type are also supported. On the command line, you specify the value for such an option as key:value. For example Then, the AuthorInfo map can be filled with something like -a name:Jesse -a "surname:van den Kieboom". Finally, for full control over the conversion between command line argument values and options, user defined types can choose to implement the Marshaler and Unmarshaler interfaces. The following is a list of tags for struct fields supported by go-flags: Either the `short:` or the `long:` tag must be specified to make the field eligible as an option. Option groups are a simple way to semantically separate your options. All options in a particular group are shown together in the help under the name of the group. Namespaces can be used to specify option long names more precisely and emphasize the options' affiliation with their group. There are currently three ways to specify option groups. The flags package also has basic support for commands. Commands are often used in monolithic applications that support various commands or actions. Take git for example: all of the add, commit, checkout, etc. are called commands. Using commands you can easily separate multiple functions of your application. There are currently two ways to specify a command. The most common, idiomatic way to implement commands is to define a global parser instance and implement each command in a separate file. These command files should define a Go init function which calls AddCommand on the global parser. When parsing ends and there is an active command and that command implements the Commander interface, then its Execute method will be run with the remaining command line arguments. Command structs can have options which become valid to parse after the command has been specified on the command line, in addition to the options of all the parent commands. For example, considering a -v flag on the parser and an add command, the following are equivalent: However, if the -v flag is defined on the add command, then the first of the two examples above would fail since the -v flag is not defined before the add command. go-flags has built-in support to provide bash completion of flags, commands and argument values. To use completion, the binary which uses go-flags can be invoked in a special environment to list completion of the current command line argument. It should be noted that this `executes` your application, and it is up to the user to make sure there are no negative side effects (for example from init functions).
Setting the environment variable `GO_FLAGS_COMPLETION=1` enables completion by replacing the argument parsing routine with the completion routine which outputs completions for the passed arguments. The basic invocation to complete a set of arguments is therefore: where `completion-example` is the binary, `arg1` and `arg2` are the current arguments, and `arg3` (the last argument) is the argument to be completed. If GO_FLAGS_COMPLETION is set to "verbose", then descriptions of possible completion items will also be shown, if there is more than one completion item. To use this with bash completion, a simple file can be written which calls the binary which supports go-flags completion: Completion requires the parser option PassDoubleDash and is therefore enforced if the environment variable GO_FLAGS_COMPLETION is set. Customized completion for argument values is supported by implementing the flags.Completer interface for the argument value type. An example of a type which does so is the flags.Filename type, an alias of string allowing simple filename completion. A slice or array argument value whose element type implements flags.Completer will also be completed.
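To make the struct-tag specification above concrete, here is a minimal sketch of an options struct with a slice option and a map option; the field names, tag values and program behaviour shown are illustrative assumptions based on the description above, not an excerpt from the package's own examples:

    package main

    import (
        "fmt"
        "os"

        "github.com/jessevdk/go-flags"
    )

    // Options is a hypothetical option struct. Each -v/--verbose occurrence
    // appends a true value to Verbose; -a/--author fills AuthorInfo from
    // key:value arguments such as -a name:Jesse.
    type Options struct {
        Verbose    []bool            `short:"v" long:"verbose" description:"Verbose output"`
        AuthorInfo map[string]string `short:"a" long:"author" description:"Author info as key:value"`
    }

    func main() {
        var opts Options

        // Parse processes os.Args and returns the remaining positional arguments.
        rest, err := flags.Parse(&opts)
        if err != nil {
            os.Exit(1)
        }
        fmt.Println("verbosity:", len(opts.Verbose), "authors:", opts.AuthorInfo, "rest:", rest)
    }

Invoked as, say, `program -vv -a name:Jesse -a "surname:van den Kieboom"`, this sketch would end up with two true values in Verbose and two entries in AuthorInfo.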
Package properties provides functions for reading and writing ISO-8859-1 and UTF-8 encoded .properties files and has support for recursive property expansion. Java properties files are ISO-8859-1 encoded and use Unicode literals for characters outside the ISO character set. Unicode literals can be used in UTF-8 encoded properties files but aren't necessary. To load a single properties file, use MustLoadFile(): To load multiple properties files, use MustLoadFiles(), which loads the files in the given order and merges the result. Missing properties files can be ignored if the 'ignoreMissing' flag is set to true. Filenames can contain environment variables which are expanded before loading. All of the different key/value delimiters ' ', ':' and '=' are supported, as well as the comment characters '!' and '#' and multi-line values. Properties stores all comments preceding a key and provides GetComments() and SetComments() methods to retrieve and update them. The convenience functions GetComment() and SetComment() allow access to the last comment. The WriteComment() method writes properties files including the comments and with the keys in the original order. This can be used for sanitizing properties files. Property expansion is recursive; circular references and malformed expressions are not allowed and cause an error. Expansion of environment variables is supported. The default property expansion format is ${key} but can be changed by setting different pre- and postfix values on the Properties object. Properties provides convenience functions for getting typed values with default values if the key does not exist or the type conversion failed. As an alternative, properties may be applied with the standard library's flag implementation at any time. Properties provides several MustXXX() convenience functions which will terminate the app if an error occurs. The behavior of the failure is configurable and the default is to call log.Fatal(err). To have the MustXXX() functions panic instead of logging the error, set a different ErrorHandler before you use the Properties package. You can also provide your own ErrorHandler function. The only requirement is that the error handler function must exit after handling the error. Properties can also be loaded into a struct via the `Decode` method, e.g. See the `Decode()` method for the full documentation. The following documents provide a description of the properties file format. http://en.wikipedia.org/wiki/.properties http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load%28java.io.Reader%29
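As a hedged illustration of the loading and typed-access workflow described above, the following sketch loads a file, reads typed values with defaults and decodes into a struct; the file name, keys and default values are invented for the example, and the exact signatures should be checked against the package documentation:

    package main

    import (
        "fmt"
        "log"

        "github.com/magiconair/properties"
    )

    func main() {
        // MustLoadFile terminates the program through the configured
        // ErrorHandler (log.Fatal by default) if loading fails; the
        // environment variable in the name is expanded before loading.
        p := properties.MustLoadFile("${HOME}/config.properties", properties.UTF8)

        // Typed getters return the supplied default if the key is missing
        // or the conversion fails.
        host := p.GetString("host", "localhost")
        port := p.GetInt("port", 8080)
        fmt.Println(host, port)

        // Decode fills a struct; field mapping is controlled by `properties` tags.
        var cfg struct {
            Host string `properties:"host,default=localhost"`
            Port int    `properties:"port,default=8080"`
        }
        if err := p.Decode(&cfg); err != nil {
            log.Fatal(err)
        }
    }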
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2018-08-02: Release v1.2.0 adds initial support for Go modules. 2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom. 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation. 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb). 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file. 2016-04-03: Release v1.0.3 fixes a data race. 2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock. 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when having the UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed. 2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements.
The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared; however, note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value.
For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits; and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type.
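As mentioned above, QL parameters are bound from the expression list passed to DB.Run or DB.Execute. The following Go sketch shows $1/$2 placeholders being filled at execution time; it assumes the cznic/ql Go API (OpenMem, NewRWCtx, Run) behaves as sketched and uses an invented dept table, so treat it as an illustration rather than a verbatim excerpt:

    package main

    import "github.com/cznic/ql"

    func main() {
        // An in-memory database keeps the sketch self-contained.
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }

        // $1 and $2 are bound to the trailing arguments of Run when the
        // statement list is executed; '?' placeholders work the same way.
        ctx := ql.NewRWCtx()
        _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE dept (Name string, Budget int64);
                INSERT INTO dept VALUES ($1, $2);
            COMMIT;`,
            "R&D", int64(100000),
        )
        if err != nil {
            panic(err)
        }
    }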
A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply - The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range. - A constant index must be non-negative and representable by a value of type int. - A constant index must be in range if the string a is a literal. - If x is out of range at run time, a run-time error occurs. - s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.
Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.
    +     sum                     integers, rationals, floats, complex values, strings
    -     difference              integers, rationals, floats, complex values, times
    *     product                 integers, rationals, floats, complex values
    /     quotient                integers, rationals, floats, complex values
    %     remainder               integers
    &     bitwise AND             integers
    |     bitwise OR              integers
    ^     bitwise XOR             integers
    &^    bit clear (AND NOT)     integers
    <<    left shift              integer << unsigned integer
    >>    right shift             integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String and Blob values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type. Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. 
Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
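The following hedged Go sketch shows one balanced transaction level used for a batch of inserts, in line with the earlier advice to avoid single-row transactions; the dept table is the invented one from the sketch above, and it is assumed that a transaction opened by one DB.Run call with a given context remains open for later Run calls using the same context:

    // insertBatch opens one transaction level, inserts all rows and commits,
    // so the BEGIN TRANSACTION is balanced by exactly one COMMIT (or by the
    // ROLLBACK in the error path). db is assumed to come from ql.OpenMem()
    // or ql.OpenFile() (import "github.com/cznic/ql").
    func insertBatch(db *ql.DB, names []string) error {
        ctx := ql.NewRWCtx()
        if _, _, err := db.Run(ctx, "BEGIN TRANSACTION;"); err != nil {
            return err
        }
        for _, name := range names {
            if _, _, err := db.Run(ctx, `INSERT INTO dept VALUES ($1, 0);`, name); err != nil {
                // Balance the opened level before giving up.
                db.Run(ctx, "ROLLBACK;")
                return err
            }
        }
        _, _, err := db.Run(ctx, "COMMIT;")
        return err
    }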
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as any of the existing table names and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4.
All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are a result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which makes it possible to insert values calculated from the same table the rows are to be inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
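For a concrete, hedged picture of producing and consuming a record set, the sketch below runs a SELECT with WHERE and ORDER BY against the invented dept table from the earlier sketches and walks the rows via a Recordset.Do callback; the exact signatures and the use of a nil context for read-only statements are assumptions to be checked against the ql package documentation:

    // queryDepts runs a filtered, ordered SELECT and walks the resulting
    // record set. db is assumed to come from ql.OpenMem() or ql.OpenFile()
    // (import "fmt" and "github.com/cznic/ql").
    func queryDepts(db *ql.DB) error {
        // Read-only statements do not require a transaction, so a nil
        // transaction context is passed here.
        rss, _, err := db.Run(nil, `
            SELECT Name, Budget
            FROM dept
            WHERE Budget > $1
            ORDER BY Name;`,
            int64(0),
        )
        if err != nil {
            return err
        }

        // Do calls the callback once per row; returning true requests more rows.
        return rss[0].Do(false, func(row []interface{}) (bool, error) {
            fmt.Println(row)
            return true, nil
        })
    }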
For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records.
For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. 1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.
The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL. The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For all three functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one argument. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int.
Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL. The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset.
To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.

The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.

The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL.

For the numeric types, the following sizes are guaranteed

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases.
However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.

Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
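As a rough illustration of the index usage described above, the sketch below creates an indexed column and issues a query whose WHERE expression matches the recognized 'column relOp constant' pattern. It assumes the database/sql driver registered under the name "ql" and the modernc.org/ql/driver import path (an assumption); the table, column and index names are illustrative only.

```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/ql/driver" // import path is an assumption; registers the "ql" driver
)

func main() {
	// Open (or create) a file-backed QL database.
	db, err := sql.Open("ql", "example.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// QL mutating statements run inside a transaction.
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	// An index on column f lets the planner satisfy simple
	// 'WHERE f relOp constant' filters without a full table scan.
	if _, err := tx.Exec("CREATE TABLE IF NOT EXISTS t (f int)"); err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec("CREATE INDEX IF NOT EXISTS tf ON t (f)"); err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}

	// The WHERE expression below matches the recognized pattern
	// 'column relOp constant', so the tf index can be used.
	rows, err := db.Query("SELECT f FROM t WHERE f > 100")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
}
```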
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). If you want to cache your MFA logins, use AuthTypeUsernamePasswordMFA authenticator. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. 
To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below).

application: Identifies your application to Snowflake Support.

insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only.

token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.

client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive such that the connection session will never expire. Care should be taken in using this option as it opens up the access forever as long as the process is alive.

ocspFailOpen: true by default. Set to false to make the OCSP check fail-closed.

validateDefaultParameters: true by default. Set to false to disable checks on existence and privileges for Database, Schema, Warehouse and Role when setting up the connection.

tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic.

disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server as well. Default value is false.

clientConfigFile: specifies the location of the client configuration json file. In this file you can configure the Easy Logging feature.

disableSamlURLCheck: disables the SAML URL check. Default value is false.

All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config.

# Connection Config

You can also connect to your warehouse using the connection config. The dbSql library states that this is useful when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type in which you are looking to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver.

Note: GOSNOWFLAKE_SKIP_REGISTERATION should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}.

You can load the connection configuration with the .toml file format. With the two environment variables SNOWFLAKE_HOME (connections.toml file directory) and SNOWFLAKE_DEFAULT_CONNECTION_NAME (DSN name), the driver will search for the config file and load the connection.
You can find out how to use this connection method at ./cmd/tomlfileconnection or in the Snowflake doc: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials

The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example:

By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can use SetLogLevel("debug") in the SFLogger interface as shown in the demo code at cmd/logger.go. To redirect the logs, the SFLogger.SetOutput method can do the work.

A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example:

If you need the query ID for your query you have to use the raw connection; the procedure differs slightly for queries and for execs. The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context.

From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example.

The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level.

Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format.
For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below.

When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.

Go Data Types for Scan()

	===================================================================================================================
	              |                              ARROW                             |                 JSON
	===================================================================================================================
	SQL Data Type | Default Go Data Type for Scan() interface{} | Supported Go Data Types for Scan() | Default Go Data Type for Scan() interface{} | Supported Go Data Types for Scan()
	===================================================================================================================
	BOOLEAN                            | bool                                                        | string | bool
	VARCHAR                            | string                                                      | string |
	DOUBLE                             | float32, float64 [1], [2]                                   | string | float32, float64
	INTEGER that fits in int64         | int, int8, int16, int32, int64 [1], [2]                     | string | int, int8, int16, int32, int64
	INTEGER that doesn't fit in int64  | int, int8, int16, int32, int64, *big.Int [1], [2], [3], [4] | string | error
	NUMBER(P, S) where S > 0           | float32, float64, *big.Float [1], [2], [3], [5]             | string | float32, float64
	DATE                               | time.Time                                                   | string | time.Time
	TIME                               | time.Time                                                   | string | time.Time
	TIMESTAMP_LTZ                      | time.Time                                                   | string | time.Time
	TIMESTAMP_NTZ                      | time.Time                                                   | string | time.Time
	TIMESTAMP_TZ                       | time.Time                                                   | string | time.Time
	BINARY                             | []byte                                                      | string | []byte
	ARRAY [6]                          | string / array                                              | string / array
	OBJECT [6]                         | string / struct                                             | string / struct
	VARIANT                            | string                                                      | string
	MAP                                | map                                                         | map

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.

[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().

[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.

[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.

[6] Arrays and objects can be either semistructured or structured; see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may be slightly different. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example:

Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps:

1. Create a struct implementing the sql.Scanner interface, example: a) b) Automatic scan goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built using the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: - the first value is always the name of a field in an SQL object - additionally an `ignore` parameter can be passed to omit this field

2. Use the WithStructuredTypesEnabled context while querying data.

3. Use it in a regular scan:

See StructuredObject for all available operations including null support, embedding nested structs, etc. Retrieving an array of simple types works exactly the same as for normal values - using the Scan function.
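A minimal sketch of scanning an array of simple types as described above, assuming a structured ARRAY(VARCHAR) column and the WithStructuredTypesEnabled context; the table and column names are illustrative.

```go
package main

import (
	"context"
	"database/sql"

	sf "github.com/snowflakedb/gosnowflake"
)

// readTags scans a structured ARRAY(VARCHAR) column directly into a Go slice.
// Table and column names are illustrative only.
func readTags(db *sql.DB) ([]string, error) {
	ctx := sf.WithStructuredTypesEnabled(context.Background())
	row := db.QueryRowContext(ctx, "SELECT tags FROM articles LIMIT 1")

	var tags []string
	if err := row.Scan(&tags); err != nil {
		return nil, err
	}
	return tags, nil
}
```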
You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays:

To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays is like binding any other parameter. The only difference is - if you want to insert an empty array (not nil but empty), you have to use:

The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value:

You can retrieve data in a columnar format similar to the format a server returns, without transposing them to rows. When working with the arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow for access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch.

ArrowBatch functions:

GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size.

WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are: WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later.

Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not.

Limitations:

How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision.
Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on the timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options:

How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context WithArrowBatchesUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.

How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow.

Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value “42“ in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The “?“ inside the “VALUES“ clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones".

Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.: The timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are mapped to a single Go time.Time type:

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use slice []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver.
Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows:

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.

Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package:

The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download takes place asynchronously in goroutines named "Chunk Downloader" so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this does not help reduce the memory footprint by itself. Consider the Custom JSON Decoder.

Custom JSON Decoder for Parsing Result Set (Experimental)

The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on the Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using the option.
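A hedged sketch of the binding parameter flags described above ("sf is an alias for the gosnowflake package"): each flag tells the driver which Snowflake type the following time.Time (or []byte) argument should be bound as. The table and column names are illustrative, and the exact flag variables are assumed from the description above.

```go
package main

import (
	"database/sql"
	"time"

	sf "github.com/snowflakedb/gosnowflake"
)

// insertTimestamps binds the same time.Time value as two different Snowflake
// timestamp types by placing a binding parameter flag before each value.
// Table and column names are illustrative only.
func insertTimestamps(db *sql.DB) error {
	now := time.Now()
	_, err := db.Exec(
		"INSERT INTO tztest (id, ntz, ltz) VALUES (1, ?, ?)",
		sf.DataTypeTimestampNtz, now, // bound as TIMESTAMP_NTZ
		sf.DataTypeTimestampLtz, now, // bound as TIMESTAMP_LTZ
	)
	return err
}
```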
The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate the valid key pair, you can execute the following commands in the shell:

Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`.

The driver allows authentication using an external browser. When a connection is created, the driver will open the browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: The external browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window was closed or is not responding. The timeout defaults to 120s and can be changed through setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example:

When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement.
For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext():

Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATEs had not functioned as expected.

The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements.

Note: PUT statements are not supported for multi-statement queries.

If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an exception is thrown. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries.

The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking).
If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go.

The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters:

Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries.

Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file) use code similar to the code below. Note: GET statements are not supported for multi-statement queries.

Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example:

Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters including progress bars or compression.
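A minimal sketch of issuing PUT and GET commands by passing strings to db.Query(), as described above; the file paths and stage names are illustrative only.

```go
package main

import "database/sql"

// stageFile uploads a local file to the table stage for "mytable" and then
// downloads the staged files into a local directory.
func stageFile(db *sql.DB) error {
	// PUT copies the local file to the stage on the cloud platform.
	put, err := db.Query(`PUT file:///tmp/data/orders.csv @%mytable`)
	if err != nil {
		return err
	}
	put.Close()

	// GET copies staged files back into a local directory.
	get, err := db.Query(`GET @%mytable file:///tmp/download/`)
	if err != nil {
		return err
	}
	return get.Close()
}
```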
Package exif parses raw EXIF information given a block of raw EXIF data. It can also construct new EXIF information, and provides tools for doing so. This package is not involved with the parsing of particular file-formats. The EXIF data must first be extracted and then provided to us. Conversely, when constructing new EXIF data, the caller is responsible for packaging this in whichever format they require.
bíogo is a bioinformatics library for the Go language. It is a work in progress.

bíogo stems from the need to address the size and structure of modern genomic and metagenomic data sets. These properties enforce requirements on the libraries and languages used for analysis: In addition to the computational burden of massive data set sizes in modern genomics there is an increasing need for complex pipelines to resolve questions in a tightening problem space and also a developing need to be able to develop new algorithms to allow novel approaches to interesting questions. These issues suggest the need for a simplicity in syntax to facilitate: Related to the second issue is the reluctance of some researchers to release code because of quality concerns (http://www.nature.com/news/2010/101013/full/467753a.html). The issue of code release is the first of the principles formalised in the Science Code Manifesto (http://sciencecodemanifesto.org/). A language with a simple, yet expressive, syntax should facilitate development of higher quality code and thus help reduce this barrier to research code release.

It seems that nearly every language has its own bioinformatics library, some of which are very mature, for example BioPerl and BioPython. Why add another one? The different libraries excel in different fields, acting as scripting glue for applications in a pipeline (much of [1-3]) and interacting with external hosts [1, 2, 4, 5], wrapping lower level high performance languages with more user friendly syntax [1-4] or providing bioinformatics functions for high performance languages [5, 6]. The intended niche for bíogo lies somewhere between the scripting libraries and high performance language libraries in being easy to use for both small and large projects while having reasonable performance with computationally intensive tasks. The intent is to reduce the level of investment required to develop new research software for computationally intensive tasks.

The bíogo library structure is influenced both by the structure of BioPerl and the Go core libraries. The coding style should be aligned with normal Go idioms as represented in the Go core libraries.

Position numbering in the bíogo library conforms to the zero-based indexing of Go and range indexing conforms to Go's half-open zero-based slice indexing. This is at odds with the 'normal' inclusive indexing used by molecular biologists. This choice was made to avoid inconsistent indexing spaces being used (one-based inclusive for bíogo functions and methods and zero-based for native Go slices and arrays) and so avoid errors that this would otherwise facilitate. Note that the GFF package does allow, and defaults to, one-based inclusive indexing in its input and output of GFF files.

Quality scores are supported for all sequence types, including protein. Phred and Solexa scoring systems can be read from files; however, the internal representation of quality scores is Phred, so there will be precision loss in conversion. A Solexa quality score type is provided for use where this will be a problem.

Copyright ©2011-2012 The bíogo Authors except where otherwise noted. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
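The indexing convention above can be illustrated with a small sketch using only plain Go slices (no bíogo API is assumed): a biologist's one-based, inclusive region maps to Go's zero-based, half-open slice interval.

```go
package main

import "fmt"

func main() {
	// A toy sequence; bíogo follows Go's zero-based, half-open indexing.
	seq := []byte("ACGTACGT")

	// A biologist would describe the region covering bases 3..5 (one-based,
	// inclusive). In Go/bíogo coordinates that is the half-open interval [2, 5).
	oneBasedStart, oneBasedEnd := 3, 5
	sub := seq[oneBasedStart-1 : oneBasedEnd]

	fmt.Printf("%s\n", sub) // GTA
}
```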
Package conv provides fast and intuitive conversions across Go types. All conversion functions accept any type of value for conversion; if unable to find a reasonable conversion path, they will return the target type's zero value and an error. Numeric conversion from other numeric values of an identical type will be returned without modification.

Numeric conversions deviate slightly from Go when dealing with under/over flow. When performing a conversion operation that would overflow, we instead assign the maximum value for the target type. Similarly, conversions that would underflow are assigned the minimum value for that type, meaning unsigned integers are given zero values instead of spilling into large positive integers.

In short, panics should not occur within this library under any circumstance. This obviously excludes any oddities that may surface when the runtime is not in a healthy state, i.e. underlying system instability, memory exhaustion. If you are able to create a reproducible panic please file a bug report.
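A hedged sketch of the clamping behaviour described above. The per-type helper names (conv.Int64, conv.Uint8) and their (value, error) return shape are assumptions based on the package description, not confirmed signatures.

```go
package main

import (
	"fmt"

	conv "github.com/cstockton/go-conv"
)

func main() {
	// String to numeric conversion; helper names are assumed.
	n, err := conv.Int64("42")
	fmt.Println(n, err)

	// A value too large for the target type is clamped to the type's
	// maximum instead of wrapping or panicking.
	over, err := conv.Uint8(1000)
	fmt.Println(over, err)

	// Underflow of an unsigned target is clamped to zero rather than
	// spilling into a large positive integer.
	under, err := conv.Uint8(-5)
	fmt.Println(under, err)
}
```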
Command gencodec generates marshaling methods for struct types. When gencodec is invoked on a directory and type name, it creates a Go source file containing JSON, YAML and TOML marshaling methods for the type. The generated methods add features which the standard json package cannot offer. The gencodec:"required" tag can be used to generate a presence check for the field. The generated unmarshaling method returns an error if a required field is missing. Other struct tags are carried over as is. The "json", "yaml", "toml" tags can be used to rename a field when marshaling. Example: An invocation of gencodec can specify an additional 'field override' struct from which marshaling type replacements are taken. If the override struct contains a field whose name matches the original type, the generated marshaling methods will use the overridden type and convert to and from the original field type. If the override struct contains a field F of type T, which does not exist in the original type, and the original type has a method named F with no arguments and return type assignable to T, the method is called by Marshal*. If there is a matching method F but the return type or arguments are unsuitable, an error is raised. In this example, the specialString type implements json.Unmarshaler to enforce additional parsing rules. When json.Unmarshal is used with type foo, the specialString unmarshaler will be used to parse the value of SpecialField. The result of foo.Func() is added to the result on marshaling under the key `id`. If the input on unmarshal contains a key `id` this field is ignored. Field types in the override struct must be trivially convertible to the original field type. gencodec's definition of 'convertible' is less restrictive than the usual rules defined in the Go language specification. The following conversions are supported: If the fields are directly assignable, no conversion is emitted. If the fields are convertible according to Go language rules, a simple conversion is emitted. Example input code: The generated code will contain: If the fields are of map or slice type and the element (and key) types are convertible, a simple loop is emitted. Example input code: The generated code is similar to this snippet: Conversions between slices and arrays are supported. Example input code: The generated code is similar to this snippet:
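The sketch below pulls together the pieces described above (the gencodec:"required" tag, the field override struct, and the specialString type) into one illustrative input file. The exact field names and the output file name are illustrative; the command-line flags -type, -field-override and -out follow the invocation style described above.

```go
//go:generate gencodec -type foo -field-override fooMarshaling -out gen_foo_json.go

package example

// foo is the type we want marshaling methods generated for.
type foo struct {
	Required     string `gencodec:"required"` // unmarshaling fails if this key is missing
	SpecialField string
}

// specialString is a string type that would implement json.Unmarshaler in a
// real project to enforce additional parsing rules.
type specialString string

// fooMarshaling is the field override struct: SpecialField is marshaled and
// unmarshaled as specialString instead of plain string.
type fooMarshaling struct {
	SpecialField specialString
}
```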
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. 
This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
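A minimal sketch of the first example mentioned above (generate a pairing, pick random group elements, apply the pairing operation), assuming the wrapper's GenerateA/NewPairing/Pair API and a working cgo build with GMP and PBC installed.

```go
package main

import (
	"fmt"

	"github.com/Nik-U/pbc"
)

func main() {
	// Generate parameters for a type A (symmetric) pairing with a 160-bit
	// group order and a 512-bit base field, then build the pairing.
	params := pbc.GenerateA(160, 512)
	pairing := params.NewPairing()

	// Pick random elements; for type A pairings G1 and G2 coincide.
	g := pairing.NewG1().Rand()
	h := pairing.NewG2().Rand()

	// Apply the bilinear map e: G1 x G2 -> GT.
	gt := pairing.NewGT().Pair(g, h)
	fmt.Println(gt)
}
```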
Package ql implements a pure Go embedded SQL database engine. Builder results available at

QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2020-12-10: The sql/database driver now supports the url parameter removeemptywal=N, which has the same semantics as passing RemoveEmptyWAL = N != 0 to OpenFile options.

2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add IsDuplicateUniqueIndexError function.

2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases set the FileFormat field in *Options passed to OpenFile to value 2 or use the driver named "ql2" instead of "ql". - Both the old and new driver will properly open and use, read and write, the old (V1) or new (V2) file format of an existing database. - V1 format has a record size limit of ~64 kB. The V2 format record size limit is math.MaxInt32. - V1 format uncommitted transaction size is limited by memory resources. V2 format uncommitted transaction size is limited by free disk space. - A direct consequence of the previous is that small transactions perform better using V1 format and big transactions perform better using V2 format. - V2 format uses substantially less memory.

2018-08-02: Release v1.2.0 adds initial support for Go modules.

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful while having a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards". 2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. That was not the case before either, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL keywords or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes. «TRANSACTION» «duration» «lovely stories» No identifiers are predeclared; however, note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens. Operators consisting of more than one character are referred to by names in the rest of the documentation. An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values. All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.
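For illustration (these are not the package's original examples, merely forms permitted by the rules above), a few literals of the kinds described so far:

	42    0600    0xBadFace                     // integer literals
	0.    72.40   2.71828   1e6   .25           // floating-point literals
	0i    2.71i                                 // imaginary literals
	'a'   'ä'   '\n'   '\x41'   '\u00e4'  '\377' // rune literals
	`a raw \string\ with no escape processing`  // raw string literal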
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. 
A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply - The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range. - A constant index must be non-negative and representable by a value of type int. - A constant index must be in range if the string s is a literal. - If x is out of range at run time, a run-time error occurs. - s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only.

	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers

	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers

	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
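As a small worked illustration of truncated division and the shift behaviour just described (values chosen here purely for illustration):

	 7 /  3 ==  2         7 %  3 ==  1
	-7 /  3 == -2        -7 %  3 == -1
	 7 / -3 == -2         7 % -3 ==  1
	-7 / -3 ==  2        -7 % -3 == -1    // in every case x == (x/y)*y + x%y

	 7 << 1 == 14         7 >> 1 ==  3
	-7 >> 1 == -4                         // >> truncates towards negative infinity; -7/2 would be -3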
For integer operands, the unary operators +, -, and ^ are defined as follows. For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String and Blob values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values. Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type.
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields a duration represented by the string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string. When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as the name of any existing table and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables.
A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeds. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing values to be inserted that are calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS clause turns the insert statement into a no-op in such a case. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis.
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause.
Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. 1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table.
For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to the given date and time values in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparison remain correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For all three functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one argument. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL. The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
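To tie the Go-level pieces mentioned throughout this documentation together (OpenFile, DB.Run, QL parameters, record sets), here is a rough usage sketch. The import path, the Options fields shown (CanCreate, FileFormat) and the Recordset.Do iteration are best-effort assumptions and should be verified against the package's reference documentation:

	package main

	import (
		"fmt"
		"log"

		"modernc.org/ql" // import path assumed
	)

	func main() {
		// CanCreate and FileFormat are assumed Options fields; FileFormat: 2
		// would select the V2 back end format mentioned in the changelog above.
		db, err := ql.OpenFile("test.db", &ql.Options{CanCreate: true, FileFormat: 2})
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Mutating statements must run inside a transaction (read-write context).
		ctx := ql.NewRWCtx()
		if _, _, err := db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE IF NOT EXISTS t (c string);
				INSERT INTO t VALUES ($1);
				INSERT INTO t VALUES ($2);
			COMMIT;`, "foo", "bar"); err != nil {
			log.Fatal(err)
		}

		// Queries need no transaction; the DB is effectively read only outside of one.
		rs, _, err := db.Run(nil, "SELECT c FROM t ORDER BY c;")
		if err != nil {
			log.Fatal(err)
		}
		if err := rs[0].Do(false, func(data []interface{}) (bool, error) {
			fmt.Println(data[0])
			return true, nil // true: continue with the next record
		}); err != nil {
			log.Fatal(err)
		}
	}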
Package config provides typesafe, cloud native configuration binding from environment variables or files to structs. Configuration can be done in as little as two lines: A field's type determines what https://pkg.go.dev/strconv function is called. All string conversion rules are as defined in the https://pkg.go.dev/strconv package. time.Duration follows the same parsing rules as https://pkg.go.dev/time#ParseDuration *url.URL follows the same parsing rules as https://pkg.go.dev/net/url#URL.Parse NOTE: `*url.URL` fields on the struct must be pointers If chaining multiple data sources, data sets are merged. Later values override previous values. Unset values remain intact or as their native zero value: https://tour.golang.org/basics/12. Nested structs/subconfigs are delimited with a double underscore. Env vars map to struct fields case insensitively. NOTE: Also true when using struct tags.
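A minimal sketch of a struct this kind of binding could target. The field names, environment variable names, and the two-line FromEnv().To(...) call shown in the trailing comment are illustrative assumptions, not documented API:

	import (
		"net/url"
		"time"
	)

	type Database struct {
		Host string // e.g. DATABASE__HOST (nested structs are delimited with a double underscore)
		Port int    // e.g. DATABASE__PORT
	}

	type AppConfig struct {
		Name     string        // e.g. NAME (fields are matched case insensitively)
		Endpoint *url.URL      // *url.URL fields must be pointers
		Timeout  time.Duration // parsed like time.ParseDuration, e.g. "30s"
		Database Database
	}

	// Assumed two-line binding (verify against the package's actual API):
	//
	//	var c AppConfig
	//	config.FromEnv().To(&c)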
Package bigqueue provides an embedded, fast and persistent queue written in pure Go using memory mapped files. bigqueue is currently not thread safe. To use bigqueue in a parallel context, a Write lock needs to be acquired (even for Read APIs). Create or open a bigqueue: bigqueue persists the data of the queue in multiple Arenas. Each Arena is a file on disk that is mapped into memory (RAM) using the mmap syscall. The default size of each Arena is set to 128MB. It is possible to create a bigqueue with a custom Arena size: bigqueue also allows setting up the maximum possible memory that it can use. By default, the maximum memory is set to [3 x Arena Size]. In this case, bigqueue will never allocate more memory than `4KB*10=40KB`. This memory is above and beyond the memory used in buffers for copying data. bigqueue allows setting a periodic flush based on either elapsed time or the number of mutate (enqueue/dequeue) operations. Flush syncs the in-memory changes of all memory mapped files with disk. This is a best-effort flush. Elapsed time and the number of mutate operations are only checked upon an enqueue/dequeue. This is how we can set these options: In this case, a flush is done after every two mutate operations. In this case, a flush is done after one minute elapses and an Enqueue/Dequeue is called. Write to bigqueue: bigqueue allows writing string data directly, avoiding conversion to `[]byte`: Read from bigqueue: we can also read string data from bigqueue: Check whether bigqueue has non zero elements: bigqueue allows reading data from bigqueue using consumers similar to Kafka. This allows multiple consumers to read data at different offsets (not in a thread safe manner yet). The offsets of each consumer are persisted on disk and can be retrieved by creating a consumer with the same name. Data will be read from the same offset where it was left off. We can create a new consumer as follows. The offsets of a new consumer are set at the start of the queue wherever the first non-deleted element is. We can also copy an existing consumer. This will create a consumer that will have the same offsets into the queue as that of the existing consumer. Now, read operations can be performed on the consumer:
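A rough sketch of the flow described above. The constructor, option, and method names used here (NewMMapQueue, SetArenaSize, SetMaxInMemArenas, SetPeriodicFlushOps, Enqueue, Dequeue, IsEmpty, NewConsumer) are assumptions about the API and may not match the package's actual identifiers; treat this as annotated pseudocode for the documented behaviour:

	// All identifiers below are assumptions; see the note above.
	bq, err := bigqueue.NewMMapQueue("path/to/queue",
		bigqueue.SetArenaSize(4*1024),   // 4KB arenas instead of the 128MB default
		bigqueue.SetMaxInMemArenas(10),  // cap mapped memory at roughly 4KB*10 = 40KB
		bigqueue.SetPeriodicFlushOps(2), // best-effort flush after every two mutate operations
	)
	if err != nil {
		log.Fatal(err)
	}
	defer bq.Close()

	_ = bq.Enqueue([]byte("elem")) // write raw bytes
	_ = bq.EnqueueString("elem")   // write string data directly
	if !bq.IsEmpty() {
		data, _ := bq.Dequeue() // read (a Write lock is still needed in parallel contexts)
		fmt.Println(string(data))
	}

	// Named consumers keep independent, persisted offsets (identifiers hypothetical):
	// c, err := bq.NewConsumer("consumer-a")
	// data, err := c.Dequeue()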
Package goics is a toolkit for encoding and decoding ics/Ical/icalendar files. This is a work in progress project that will try to incorporate as many exceptions and variants of the format as possible. It is a toolkit because the user has to define the way they need the data. The idea is built on something similar to the consumer/provider pattern. Say we want to decode a stream of vevents from a .ics file into a custom type Events. Our custom type will need to implement the ICalConsumer interface, through which the type picks up data from the format. The decoding process will be something like this: I chose this model because this format is a pain and I am also not a big fan of the reflect package. For encoding objects to the iCal format, something similar has to be done: the object emitting elements for the encoder will have to implement ICalEmiter, returning a Component structure to be encoded. This was also done because every package may need to encode values and keys its own way; just for encoding time, I found more than three types of lines. Componenter is an interface implemented by every Component that can be encoded to ical. Properties have to be stored as strings; the conversion from the origin type to the string format must be done in the emitter. There are some helpers for date conversion, and in the future I will add more, for encoding params on the string and also for handling lists and recurrent events. A simple, non-functional example used for testing:
Package flagsfiller makes Go's flag package pleasant to use by mapping the fields of a given struct into flags in a FlagSet. A FlagSetFiller is created with the New constructor, passing it any desired FillerOptions. With that, call Fill, passing it a flag.FlagSet, such as flag.CommandLine, and your struct to be mapped. Even a simple struct with no special changes can be used, such as: After calling Parse on the flag.FlagSet, the corresponding fields of the mapped struct will be populated with values passed from the command-line. For an even quicker start, flagsfiller provides a convenience Parse function that does the same as the snippet above in one call: By default, the flags are named by taking the field name and performing a word-wise conversion to kebab-case. For example, the field named "MyMultiWordField" becomes the flag named "my-multi-word-field". The naming strategy can be changed by passing a custom Renamer using the WithFieldRenamer option in the constructor. Additional aliases, such as short names, can be declared with the `aliases` tag as a comma-separated list: FlagSetFiller supports nested structs and computes the flag names by prefixing the field name of the struct to the names of the fields it contains. For example, the following maps to the flags named remote-host, remote-auth-username, and remote-auth-password: To declare a flag's usage add a `usage:""` tag to the field, such as: Since flag.UnquoteUsage normally uses back quotes to locate the argument placeholder name but struct tags also use back quotes, flagsfiller will instead use [square brackets] to define the placeholder name, such as: results in the rendered output: To declare the default value of a flag, you can either set a field's value before passing the struct to process, such as: or add a `default:""` tag to the field. Be sure to provide a valid string that can be converted into the field's type. For example, FlagSetFiller also includes support for []string fields. Repetition of the argument appends to the slice and/or an argument value can contain a comma or newline separated list of values. For example: results in a three element slice. The default tag's value is provided as a comma-separated list, such as FlagSetFiller also includes support for map[string]string fields. Each argument entry is a key=value and/or repetition of the arguments adds to the map or multiple entries can be comma or newline separated in a single argument value. For example: results in a map with three entries. The default tag's value is provided as a comma-separated list of key=value entries, such as FlagSetFiller also supports the following field types:
- net.IP: format used by net.ParseIP()
- net.IPNet: format used by net.ParseCIDR()
- net.HardwareAddr (MAC addr): format used by net.ParseMAC()
- time.Time: format is the layout string used by time.Parse(); the default layout is time.DateTime and can be overridden by the field tag "layout"
- slog.Level: parsed as specified by https://pkg.go.dev/log/slog#Level.UnmarshalText, such as "info"
To activate the setting of flag values from environment variables, pass the WithEnv option to flagsfiller.New or flagsfiller.Parse. That option takes a prefix that will be prepended to the resolved field name and then the whole thing is converted to SCREAMING_SNAKE_CASE. The environment variable name will be automatically included in the flag usage along with the standard inclusion of the default value.
For example, using the option WithEnv("App") along with the following field declaration would render the following usage: To override the naming of a flag, the field can be declared with the tag `flag:"name"` where the given name will be used exactly as the flag name. An empty string for the name indicates the field should be ignored and no flag is declared. For example, Environment variable naming and processing can be overridden with the `env:"name"` tag, where the given name will be used exactly as the mapped environment variable name. If the WithEnv or WithEnvRenamer options were enabled, a field can be excluded from environment variable mapping by giving an empty string. Conversely, environment variable mapping can be enabled per field with `env:"name"` even when the flagsfiller-wide option was not included. For example, Support is also included for any field type that implements the encoding.TextUnmarshaler interface.
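Pulling these pieces together, here is a small, hedged sketch (the import path is assumed; the field names and tags are illustrative only):

	package main

	import (
		"flag"
		"fmt"
		"log"
		"time"

		flagsfiller "github.com/itzg/go-flagsfiller" // assumed import path
	)

	type Config struct {
		Host    string        `default:"localhost" usage:"the [host] to contact" aliases:"H"`
		Debug   bool          `usage:"enable debug logging"`
		Timeout time.Duration `default:"5s" usage:"request timeout"`
		Remote  struct {
			Password string `usage:"remote password" env:"REMOTE_PASSWORD"`
		}
	}

	func main() {
		var config Config

		// Map the struct's fields onto flag.CommandLine, then parse as usual.
		// This declares the flags host, debug, timeout and remote-password.
		filler := flagsfiller.New()
		if err := filler.Fill(flag.CommandLine, &config); err != nil {
			log.Fatal(err)
		}
		flag.Parse()

		fmt.Printf("%+v\n", config)
	}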
Package goose implements conversion from Go source to Perennial definitions. The exposed interface allows converting individual files as well as whole packages to a single Coq Ast with all the converted definitions, which include user-defined structs in Go as Coq records and a Perennial procedure for each Go function. See the Goose README at https://github.com/tchajed/goose for a high-level overview. The source also has some design documentation at https://github.com/tchajed/goose/tree/master/docs.
Package rootcerts provides a Go conversion of Mozilla's certdata.txt file, extracting trusted CA certificates only. It was generated using the gencerts tool using the following command line: This package allows for the embedding of root CA certificates directly into a Go executable, reducing or negating the need for Go to have access to root certificates provided by the operating system in order to validate certificates issued by those authorities. Root certificates can be accessed through this package, or may be easily installed into the http package's DefaultTransport by calling UpdateDefaultTransport.
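As a brief sketch of the DefaultTransport installation mentioned above (the import path is an assumption; UpdateDefaultTransport is the call described in this documentation):

	package main

	import (
		"fmt"
		"log"
		"net/http"

		"github.com/gwatts/rootcerts" // assumed import path
	)

	func main() {
		// Install the embedded root CA certificates into http.DefaultTransport so
		// TLS verification can succeed even without OS-provided root certificates.
		rootcerts.UpdateDefaultTransport()

		resp, err := http.Get("https://example.com/")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status)
	}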
Package cgi implements the common gateway interface (CGI) for Caddy, a modern, full-featured, easy-to-use web server. This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input. The advantage of CGI is that you do not need to fuss with server startup and persistence, long term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options extend easily to your CGI scripts. CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server’s responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script’s dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner. Serving dynamic content exposes your server to more potential threats than serving static pages. There are a number of considerations of which you should be aware when using CGI applications. CGI SCRIPTS SHOULD BE LOCATED OUTSIDE OF CADDY’S DOCUMENT ROOT. Otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server. MISTRUST THE CONTENTS OF PATH_INFO, QUERY_STRING AND STANDARD INPUT. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations. An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Additionally, if the Caddy errors directive is enabled, any content the application writes to its standard error stream will be written to the error log. This can be useful to diagnose problems with the execution of the CGI application. Your CGI application can be executed directly or indirectly. 
In the direct case, the application can be a compiled native executable or it can be a shell script that contains as its first line a shebang that identifies the interpreter to which the file’s name should be passed. Caddy must have permission to execute the application. On Posix systems this will mean making sure the application’s ownership and permission bits are set appropriately; on Windows, this may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl or python. The basic cgi directive lets you associate a single pattern with a particular script. The directive can be repeated any reasonable number of times. Here is the basic syntax: For example: When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy itself. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script. Here is an example of a standalone script, similar to one used in the cgi plugin’s test suite: The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables included that are described below. If you need to pass any special environment variables or allow any environment variables that are part of Caddy’s process to pass to your script, you will need to use the advanced directive syntax described below. The values used for the script name and its arguments are subject to placeholder replacement. In addition to the standard Caddy placeholders such as {method} and {host}, the following placeholder substitutions are made: - {.} is replaced with Caddy’s current working directory - {match} is replaced with the portion of the request that satisfies the match directive - {root} is replaced with Caddy’s specified root directory You can include glob wildcards in your matches. Basically, an asterisk represents a sequence of zero or more non-slash characters and a question mark represents a single non-slash character. These wildcards can be used multiple times in a match expression. See the documentation for path/Match in the Go standard library for more details about glob matching. Here is an example directive: In this case, the cgi middleware will match requests such as https://example.com/report/weekly.lua and https://example.com/report/report.lua/weekly but not https://example.com/report.lua. The use of the asterisk expands to any character sequence within a directory. For example, if the request is made, the following command is executed: Note that the portion of the request that follows the match is not included. That information is conveyed to the script by means of environment variables. In this example, the Lua interpreter is invoked directly from Caddy, so the Lua script does not need the shebang that would be needed in a standalone script. This method facilitates the use of CGI on the Windows platform. In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax. That looks like this: For example, With the advanced syntax, the exec subdirective must appear exactly once. The match subdirective must appear at least once. 
The env, pass_env, empty_env, and except subdirectives can appear any reasonable number of times. The pass_all_env and dir subdirectives may each appear once. The dir subdirective specifies the CGI executable’s working directory. If it is not specified, Caddy’s current working directory is used. The except subdirective uses the same pattern matching logic that is used with the match subdirective except that the request must match a rule fully; no request path prefix matching is performed. Any request that matches a match pattern is then checked with the patterns in except, if any. If any matches are made with the except pattern, the request is rejected and passed along to subsequent handlers. This is a convenient way to have static file resources served properly rather than being confused as CGI applications. The empty_env subdirective is used to pass one or more empty environment variables. Some CGI scripts may expect the server to pass certain empty variables rather than leaving them unset. This subdirective allows you to deal with those situations. The values associated with environment variable keys are all subject to placeholder substitution, just as with the script name and arguments. If your CGI application runs properly at the command line but fails to run from Caddy, it is possible that certain environment variables may be missing. For example, the ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications requires the COMPUTERNAME variable. The pass_all_env subdirective instructs Caddy to pass each environment variable it knows about to the CGI executable. This addresses a common frustration that is caused when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable. Use this subdirective only with CGI applications that you trust not to leak this information. If you protect your CGI application with the Caddy JWT middleware, your program will have access to the token’s payload claims by means of environment variables. For example, the following token claims will be available with the following environment variables All values are conveyed as strings, so some conversion may be necessary in your program. No placeholder substitutions are made on these values. If you run into unexpected results with the CGI plugin, you are able to examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin will respond to matching requests with a page that displays variables of interest. In particular, it will show the replacement value of {match} and the environment variables to which your CGI application has access. For example, consider this example CGI block: When you request a matching URL, for example, the Caddy server will deliver a text page similar to the following. The CGI application (in this case, wapptclsh) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective.
In this example, the Caddyfile looks like this: Note that a request for /show gets mapped to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern. The contents of /usr/local/cgi-bin/report/gen are: The purpose of this script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input. In this particular case, posted data gets stored in the variable POST_DATA. Your script may use a different method to read POST content. Secondly, the SCRIPT_EXEC variable is not a CGI standard. It is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed. When a browser requests the response looks like When a client makes a POST request, such as with the following command the response looks the same except for the following lines: The fossil distributed software management tool is a native executable that supports interaction as a CGI application. In this example, /usr/bin/fossil is the executable and /home/quixote/projects.fossil is the fossil repository. To configure Caddy to serve it, use a cgi directive something like this in your Caddyfile: In your /usr/local/cgi-bin directory, make a file named projects with the following single line: The fossil documentation calls this a command file. When fossil is invoked after a request to /projects, it examines the relevant environment variables and responds as a CGI application. If you protect /projects with basic HTTP authentication, you may wish to enable the ALLOW REMOTE_USER AUTHENTICATION option when setting up fossil. This lets fossil dispense with its own authentication, assuming it has an account for the user. The agedu utility can be used to identify unused files that are taking up space on your storage media. Like fossil, it can be used in different modes including CGI. First, use it from the command line to generate an index of a directory, for example In your Caddyfile, include a directive that references the generated index: You will want to protect the /agedu resource with some sort of access control, for example HTTP Basic Authentication. This small example demonstrates how to write a CGI program in Go. The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, the following directive in your Caddy file will make it available: The cgit application provides an attractive and useful web interface to git repositories. Here is how to run it with Caddy. After compiling cgit, you can place the executable somewhere out of Caddy’s document root. In this example, it is located in /usr/local/cgi-bin. A sample configuration file is included in the project’s cgitrc.5.txt file. You can use it as a starting point for your configuration. The default location for this file is /etc/cgitrc but in this example the location /home/quixote/caddy/cgitrc. Note that changing the location of this file from its default will necessitate the inclusion of the environment variable CGIT_CONFIG in the Caddyfile cgi directive. When you edit the repository stanzas in this file, be sure each repo.path item refers to the .git directory within a working checkout. Here is an example stanza: Also, you will likely want to change cgit’s cache directory from its default in /var/cache (generally accessible only to root) to a location writeable by Caddy. 
In this example, cgitrc contains the line You may need to create the cgit subdirectory. There are some static cgit resources (namely, cgit.css, favicon.ico, and cgit.png) that will be accessed from Caddy’s document tree. For this example, these files are placed in a directory named cgit-resource. The following lines are part of the cgitrc file: Additionally, you will likely need to tweak the various file viewer filters such source-filter and about-filter based on your system. The following Caddyfile directive will allow you to access the cgit application at /cgit: Feeling reckless? You can run PHP in CGI mode. In general, FastCGI is the preferred method to run PHP if your application has many pages or a fair amount of database activity. But for small PHP programs that are seldom used, CGI can work fine. You’ll need the php-cgi interpreter for your platform. This may involve downloading the executable or downloading and then compiling the source code. For this example, assume the interpreter is installed as /usr/local/bin/php-cgi. Additionally, because of the way PHP operates in CGI mode, you will need an intermediate script. This one works in Posix environments: This script can be reused for multiple cgi directives. In this example, it is installed as /usr/local/cgi-bin/phpwrap. The argument following -c is your initialization file for PHP. In this example, it is named /home/quixote/.config/php/php-cgi.ini. Two PHP files will be used for this example. The first, /usr/local/cgi-bin/sample/min.php, looks like this: The second, /usr/local/cgi-bin/sample/action.php, follows: The following directive in your Caddyfile will make the application available at sample/min.php: This examples demonstrates printing a CGI rule
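To complement the walkthroughs above, here is a small self-contained sketch of a CGI program written in Go, in the spirit of the servertime example mentioned earlier. It buffers its output so the Content-Length header can be reported, and echoes a few of the standard CGI environment variables; nothing in it depends on this plugin's API.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Build the body first so its length is known for the CGI header.
		var body bytes.Buffer
		fmt.Fprintf(&body, "Server time:  %s\n", time.Now().Format(time.RFC1123))
		fmt.Fprintf(&body, "PATH_INFO:    %s\n", os.Getenv("PATH_INFO"))
		fmt.Fprintf(&body, "QUERY_STRING: %s\n", os.Getenv("QUERY_STRING"))

		// A CGI response is a small header block, a blank line, then the body.
		fmt.Print("Content-Type: text/plain\n")
		fmt.Printf("Content-Length: %d\n\n", body.Len())
		body.WriteTo(os.Stdout)
	}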
Package dsc - datastore connectivity library This library provides connection capabilities to SQL, noSQL datastores or structured files, providing an SQL layer on top of them. For native database/sql it is just a ("database/sql") proxy, and for noSQL it supports simple SQL that is translated to put/get, scan and batch native NoSQL operations. Datastore Manager implements read, batch (no insert nor update), and delete operations. The Read operation requires a data record mapper, the Persist operation requires a dml provider, and the Delete operation requires a key provider. Datastore Manager provides a default record mapper and dml/key provider for a struct if no actual implementation is passed in. The following field tag attributes are supported:
1 column - name of the datastore field/column
2 autoincrement - boolean flag to use autoincrement; in this case, on insert the value can be automatically set back on the application model class
3 primaryKey - boolean flag marking a primary key
4 dateLayout - date layout used for the string to time.Time conversion
5 dateFormat - date format (java simple date format)
6 sequence - name of the sequence used to generate the next id
7 transient - boolean flag to not map a field with record data
8 valueMap - value mapping that will be applied after fetching a record and before writing it to the datastore
Usage:
Package sqlite implements a database/sql driver for SQLite3. This driver requires a file: URI always be used to open a database. For details see https://sqlite.org/c3ref/open.html#urifilenames. If you want to do initial configuration of a connection, or enable tracing, use the Connector function: In-memory databases are popular for tests. Use the "memdb" VFS (*not* the legacy in-memory modes) to be compatible with the database/sql connection pool: Use a different dbname for each memory database opened. SQLite is flexible about type conversions, and so is this driver. Almost all "basic" Go types (int, float64, string) are accepted and directly mapped into SQLite, even if they are named Go types. The time.Time type is also accepted (described below). Values that implement encoding.TextMarshaler or json.Marshaler are stored in SQLite in their marshaled form. While SQLite3 has no strict time datatype, it does have a series of built-in functions that operate on timestamps that expect columns to be in one of many formats: https://sqlite.org/lang_datefunc.html When encoding a time.Time into one of SQLite's preferred formats, we use the shortest timestamp format that can accurately represent the time.Time. The supported formats are: If the time.Time is not UTC (strongly consider storing times in UTC!), we follow SQLite's norm of appending "[+-]HH:MM" to the above formats. It is common in SQLite to store "Unix time", seconds-since-epoch in an INTEGER column. This is understood by the date and time functions documented in the link above. If you want to do that, pass the result of time.Time.Unix to the driver. In general, time is hard to extract from SQLite as a time.Time. If a column is defined as DATE or DATETIME, then text data is parsed as TimeFormat and returned as a time.Time. Integer data is parsed as seconds since epoch and returned as a time.Time.
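As a short sketch of the two opening styles described above (the registered driver name "sqlite3" is an assumption; check the package for the name it actually registers, or use the Connector function instead):

	package main

	import (
		"database/sql"
		"log"
		// Blank-import this package so its driver is registered, e.g.:
		// _ "<this package's import path>"
	)

	func main() {
		// A file: URI must always be used; this opens an ordinary on-disk database.
		db, err := sql.Open("sqlite3", "file:app.db")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// For tests, use the "memdb" VFS and a distinct dbname per database so the
		// database/sql connection pool shares one in-memory database.
		testDB, err := sql.Open("sqlite3", "file:/testdb1?vfs=memdb")
		if err != nil {
			log.Fatal(err)
		}
		defer testDB.Close()

		if _, err := testDB.Exec(`CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
			log.Fatal(err)
		}
	}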
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).
2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.
2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.
2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb).
2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.
2016-04-03: Release v1.0.3 fixes a data race.
2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock.
2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.
2016-03-05: The current version has been tagged v1.0.0.
2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.
2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.
2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when combined with a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default.
2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.
2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.
2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)
2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section.
2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand to gain better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases.
The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19; the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly, failing to close a DB could have resulted in losing the data of the last transaction.
2014-09-21: id() now optionally accepts a single argument - a table name.
2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.
2014-08-08: The built in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)
2014-06-05: RecordSet interface extended by new methods FirstRow and Rows.
2014-06-02: Indices on id() are now used by SELECT statements.
2014-05-07: Introduction of Marshal, Schema, Unmarshal.
2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX.
2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.
2014-04-11: Introduction of LIMIT, OFFSET.
2014-04-10: Introduction of query rewriting.
2014-04-07: Introduction of indices.
QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter.
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared, however note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for meta data virtual tables names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. 
Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes “. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. 
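Returning briefly to the QL parameters described above, here is a hedged sketch of passing them from Go (OpenMem, NewRWCtx and the Run signature are assumptions about the ql Go API, consistent with the DB.Run/DB.Execute mention; verify against the package documentation):

	package main

	import (
		"log"

		"github.com/cznic/ql" // assumed import path
	)

	func main() {
		db, err := ql.OpenMem() // assumed constructor for an in-memory DB
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		ctx := ql.NewRWCtx() // assumed read-write transaction context
		// $1 and $2 are QL parameters; their values come from the trailing
		// arguments at execution time, not from the statement text.
		_, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE person (Name string, Age int64);
				INSERT INTO person VALUES ($1, $2);
			COMMIT;
		`, "Alice", int64(42))
		if err != nil {
			log.Fatal(err)
		}
	}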
Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply
- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string a is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte.
If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.
    +    sum                    integers, rationals, floats, complex values, strings
    -    difference             integers, rationals, floats, complex values, times
    *    product                integers, rationals, floats, complex values
    /    quotient               integers, rationals, floats, complex values
    %    remainder              integers
    &    bitwise AND            integers
    |    bitwise OR             integers
    ^    bitwise XOR            integers
    &^   bit clear (AND NOT)    integers
    <<   left shift             integer << unsigned integer
    >>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows. For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type. Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. 
Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as that of any existing table and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4.
All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing values to be inserted that are calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were copied and pasted directly into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
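As a sketch of how these SELECT clauses combine, here is a hypothetical statement (the employee table and its columns are made up for illustration; the statement is shown as a Go raw-string constant only so the example stays in Go, and in a real program it would be handed to the ql package or the ql command line tool):

    package main

    import "fmt"

    // A hypothetical query combining a field list, a DISTINCT modifier,
    // a WHERE filter and an ORDER BY clause.
    const selectDept = `
        SELECT DISTINCT DepartmentName, Salary
        FROM employee
        WHERE Salary > 1000
        ORDER BY Salary DESC;
    `

    func main() {
        fmt.Println(selectDept)
    }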
For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. 1.
The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying of DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to the given year, month, day, hour, min, sec, nsec and loc values in the appropriate zone for that time in the given location.
The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL. The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join.
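The date and formatTime built-ins are described above in terms of Go's time package, so a small Go program can illustrate the same normalization and reference-time layout behaviour (the dates and layout below are only examples; the layout string is Go's standard reference time):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Out-of-range values are normalized, so October 32 becomes November 1,
        // mirroring the behaviour documented for the date() built-in.
        t := time.Date(2009, time.October, 32, 12, 0, 0, 0, time.UTC)
        fmt.Println(t) // 2009-11-01 12:00:00 +0000 UTC

        // A layout shows how the reference time would be displayed, which is
        // the same convention formatTime is documented to follow.
        fmt.Println(t.Format("2006-01-02 15:04:05 MST"))
    }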
For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL. The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
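The parseTime behaviour described above matches Go's time.Parse, so a minimal Go illustration of the "missing elements default to zero or one" rule may help (the layout and value are only examples):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Elements missing from the value default to zero or, where zero is
        // impossible, to one: a kitchen-clock value parses as Jan 1, year 0.
        t, err := time.Parse(time.Kitchen, "3:04PM")
        if err != nil {
            panic(err)
        }
        fmt.Println(t) // 0000-01-01 15:04:00 +0000 UTC
    }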
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL. The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real or imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them.
However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
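To tie the statement and transaction rules above together, here is a minimal Go sketch of driving QL from a program. It assumes the ql package API as commonly shown (ql.OpenMem, ql.NewRWCtx and DB.Run); the table and column names are made up for the example, and error handling is kept deliberately crude:

    package main

    import (
        "fmt"

        "github.com/cznic/ql"
    )

    func main() {
        // Assumed API: OpenMem creates an in-memory DB; OpenFile would persist.
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }

        // Mutating statements must run inside a transaction, so a read/write
        // context and BEGIN ... COMMIT are required.
        ctx := ql.NewRWCtx()
        _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE IF NOT EXISTS employee (DepartmentName string, Salary int64);
                INSERT INTO employee VALUES ($1, $2);
            COMMIT;`,
            "Engineering", int64(2100))
        if err != nil {
            panic(err)
        }

        // Plain SELECTs are read only and do not need a transaction.
        rs, _, err := db.Run(nil, "SELECT * FROM employee ORDER BY Salary;")
        if err != nil {
            panic(err)
        }
        _ = rs // a real program would iterate over the returned recordsets
        fmt.Println("ok")
    }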
Package awk implements AWK-style processing of input streams. The awk package can be considered a shallow EDSL (embedded domain-specific language) for Go that facilitates text processing. It aims to implement the core semantics provided by AWK, a pattern scanning and processing language defined as part of the POSIX 1003.1 standard (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html) and therefore part of all standard Linux/Unix distributions. AWK's forte is simple transformations of tabular data. For example, the following is a complete AWK program that reads an entire file from the standard input device, splits each line into whitespace-separated columns, and outputs all lines in which the fifth column is an odd number: Here's a typical Go analogue of that one-line AWK program: The goal of the awk package is to emulate AWK's simplicity while simultaneously taking advantage of Go's speed, safety, and flexibility. With the awk package, the preceding code reduces to the following: While not a one-liner like the original AWK program, the above is conceptually close to it. The AppendStmt method defines a script in terms of patterns and actions exactly as in the AWK program. The Run method then runs the script on an input stream, which can be any io.Reader. For those programmers unfamiliar with AWK, an AWK program consists of a sequence of pattern/action pairs. Each pattern that matches a given line causes the corresponding action to be performed. AWK programs tend to be terse because AWK implicitly reads the input file, splits it into records (default: newline-terminated lines), and splits each record into fields (default: whitespace-separated columns), saving the programmer from having to express such operations explicitly. Furthermore, AWK provides a default pattern, which matches every record, and a default action, which outputs a record unmodified. The awk package attempts to mimic those semantics in Go. Basic usage consists of three steps: 1. Script allocation (awk.NewScript) 2. Script definition (Script.AppendStmt) 3. Script execution (Script.Run) In Step 2, AppendStmt is called once for each pattern/action pair that is to be appended to the script. The same script can be applied to multiple input streams by re-executing Step 3. Actions to be executed on every run of Step 3 can be supplied by assigning the script's Begin and End fields. The Begin action is typically used to initialize script state by calling methods such as SetRS and SetFS and assigning user-defined data to the script's State field (what would be global variables in AWK). The End action is typically used to store or report final results. To mimic AWK's dynamic type system, the awk package provides the Value and ValueArray types. Value represents a scalar that can be coerced without error to a string, an int, or a float64. ValueArray represents a—possibly multidimensional—associative array of Values. Both patterns and actions can access the current record's fields via the script's F method, which takes a 1-based index and returns the corresponding field as a Value. An index of 0 returns the entire record as a Value.
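A sketch of the three steps just described, using the documented NewScript, AppendStmt, F and Run names (the import path is the one commonly published for this package and may differ for your copy; treat the details as illustrative):

    package main

    import (
        "os"

        "github.com/spakin/awk"
    )

    func main() {
        // Step 1: allocate a script.
        s := awk.NewScript()

        // Step 2: append a pattern/action pair. The pattern matches records
        // whose fifth field is an odd number; a nil action stands in for the
        // default action, which prints the record unmodified.
        s.AppendStmt(func(s *awk.Script) bool {
            return s.F(5).Int()%2 == 1
        }, nil)

        // Step 3: run the script on an input stream.
        if err := s.Run(os.Stdin); err != nil {
            panic(err)
        }
    }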
The following AWK features and GNU AWK extensions are currently supported by the awk package: • the basic pattern/action structure of an AWK script, including BEGIN and END rules and range patterns • control over record separation (RS), including regular expressions and null strings (implying blank lines as separators) • control over field separation (FS), including regular expressions and null strings (implying single-character fields) • fixed-width fields (FIELDWIDTHS) • fields defined by a regular expression (FPAT) • control over case-sensitive vs. case-insensitive comparisons (IGNORECASE) • control over the number conversion format (CONVFMT) • automatic enumeration of records (NR) and fields (NF) • "weak typing" • multidimensional associative arrays • premature termination of record processing (next) and script processing (exit) • explicit record reading (getline) from either the current stream or a specified stream • maintenance of regular-expression status variables (RT, RSTART, and RLENGTH) For more information about AWK and its features, see the awk(1) manual page on any Linux/Unix system (available online from, e.g., http://linux.die.net/man/1/awk) or read the book, "The AWK Programming Language" by Aho, Kernighan, and Weinberger. A number of examples ported from the POSIX 1003.1 standard document (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html) are presented below.
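For instance, record/field separation and per-run state are driven from the Begin and End hooks mentioned earlier. The sketch below assumes the field names and methods documented above (Begin, End, State, SetFS, F); the colon-separated input and the use of an int held in State are only examples:

    package main

    import (
        "fmt"
        "strings"

        "github.com/spakin/awk"
    )

    func main() {
        s := awk.NewScript()

        // Begin runs once per Run invocation: set a field separator and
        // initialize user state, much like an AWK BEGIN rule.
        s.Begin = func(s *awk.Script) {
            s.SetFS(":")
            s.State = 0
        }

        // Count records whose first field is "root".
        s.AppendStmt(func(s *awk.Script) bool {
            return s.F(1).String() == "root"
        }, func(s *awk.Script) {
            s.State = s.State.(int) + 1
        })

        // End runs after the input is exhausted, like an AWK END rule.
        s.End = func(s *awk.Script) { fmt.Println("matches:", s.State) }

        if err := s.Run(strings.NewReader("root:x:0:0\ndaemon:x:1:1\n")); err != nil {
            panic(err)
        }
    }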
Package properties provides functions for reading and writing ISO-8859-1 and UTF-8 encoded .properties files and has support for recursive property expansion. Java properties files are ISO-8859-1 encoded and use Unicode literals for characters outside the ISO character set. Unicode literals can be used in UTF-8 encoded properties files but aren't necessary. To load a single properties file use MustLoadFile(): To load multiple properties files use MustLoadFiles() which loads the files in the given order and merges the result. Missing properties files can be ignored if the 'ignoreMissing' flag is set to true. Filenames can contain environment variables which are expanded before loading. All of the different key/value delimiters ' ', ':' and '=' are supported as well as the comment characters '!' and '#' and multi-line values. Properties stores all comments preceding a key and provides GetComments() and SetComments() methods to retrieve and update them. The convenience functions GetComment() and SetComment() allow access to the last comment. The WriteComment() method writes properties files including the comments and with the keys in the original order. This can be used for sanitizing properties files. Property expansion is recursive and circular references and malformed expressions are not allowed and cause an error. Expansion of environment variables is supported. The default property expansion format is ${key} but can be changed by setting different pre- and postfix values on the Properties object. Properties provides convenience functions for getting typed values with default values if the key does not exist or the type conversion failed. As an alternative properties may be applied with the standard library's flag implementation at any time. Properties provides several MustXXX() convenience functions which will terminate the app if an error occurs. The behavior of the failure is configurable and the default is to call log.Fatal(err). To have the MustXXX() functions panic instead of logging the error set a different ErrorHandler before you use the Properties package. You can also provide your own ErrorHandler function. The only requirement is that the error handler function must exit after handling the error. Properties can also be loaded into a struct via the `Decode` method, e.g. See `Decode()` method for the full documentation. The following documents provide a description of the properties file format. http://en.wikipedia.org/wiki/.properties http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load%28java.io.Reader%29
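A short sketch of the loading and typed-access functions mentioned above; the file name, keys and defaults are examples, and the import path is the commonly used one and may differ for your copy:

    package main

    import (
        "fmt"

        "github.com/magiconair/properties"
    )

    func main() {
        // MustLoadFile terminates the program via the ErrorHandler
        // (log.Fatal by default) if the file cannot be read.
        p := properties.MustLoadFile("app.properties", properties.UTF8)

        // Typed getters fall back to the supplied default when the key is
        // missing or the value cannot be converted.
        host := p.GetString("db.host", "localhost")
        port := p.GetInt("db.port", 5432)
        debug := p.GetBool("debug", false)

        fmt.Println(host, port, debug)
    }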
bíogo is a bioinformatics library for the Go language. It is a work in progress. bíogo stems from the need to address the size and structure of modern genomic and metagenomic data sets. These properties enforce requirements on the libraries and languages used for analysis: In addition to the computational burden of massive data set sizes in modern genomics there is an increasing need for complex pipelines to resolve questions in tightening problem space and also a developing need to be able to develop new algorithms to allow novel approaches to interesting questions. These issues suggest the need for a simplicity in syntax to facilitate: Related to the second issue is the reluctance of some researchers to release code because of quality concerns http://www.nature.com/news/2010/101013/full/467753a.html The issue of code release is the first of the principles formalised in the Science Code Manifesto http://sciencecodemanifesto.org/ A language with a simple, yet expressive, syntax should facilitate development of higher quality code and thus help reduce this barrier to research code release. It seems that nearly every language has its own bioinformatics library, some of which are very mature, for example BioPerl and BioPython. Why add another one? The different libraries excel in different fields, acting as scripting glue for applications in a pipeline (much of [1-3]) and interacting with external hosts [1, 2, 4, 5], wrapping lower level high performance languages with more user friendly syntax [1-4] or providing bioinformatics functions for high performance languages [5, 6]. The intended niche for bíogo lies somewhere between the scripting libraries and high performance language libraries in being easy to use for both small and large projects while having reasonable performance with computationally intensive tasks. The intent is to reduce the level of investment required to develop new research software for computationally intensive tasks. The bíogo library structure is influenced both by the structure of BioPerl and the Go core libraries. The coding style should be aligned with normal Go idioms as represented in the Go core libraries. Position numbering in the bíogo library conforms to the zero-based indexing of Go and range indexing conforms to Go's half-open zero-based slice indexing. This is at odds with the 'normal' inclusive indexing used by molecular biologists. This choice was made to avoid inconsistent indexing spaces being used — one-based inclusive for bíogo functions and methods and zero-based for native Go slices and arrays — and so avoid errors that this would otherwise facilitate. Note that the GFF package does allow, and defaults to, one-based inclusive indexing in its input and output of GFF files. Quality scores are supported for all sequence types, including protein. Phred and Solexa scoring systems can be read from files; however, the internal representation of quality scores is Phred, so there will be precision loss in conversion. A Solexa quality score type is provided for use where this will be a problem. Copyright ©2011-2012 The bíogo Authors except where otherwise noted. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Package main is the UBNT edgeos-dnsmasq-blacklist dnsmasq DNS Blacklisting and Redirection. View the software license here (https://github.com/britannic/blacklist/blob/master/LICENSE.txt)Latest versionVersion (https://github.com/britannic/blacklist)Go documentationGoDoc (https://godoc.org/github.com/britannic/blacklist)Build status for this versionBuild Status (https://travis-ci.org/britannic/blacklist)Test coverage status for this versionCoverage Status (https://coveralls.io/github/britannic/blacklist?branch=master)Quality of Go code for this versionGo Report Card (https://goreportcard.com/report/github.com/britannic/blacklist) Follow the conversation @ community.ubnt.com (https://community.ubnt.com/t5/EdgeRouter/DNS-Adblocking-amp-Blacklisting-dnsmasq-Configuration/td-p/2215008/jump-to/first-unread-message "Follow the conversation about this software in the EdgeRouter forum (https://community.ubnt.com/t5/EdgeRouter/)") Please show your thanks by donating to the project using Securely send and receive cash without fees using Square CashSquare Cash (https://cash.me/$HelmRockSecurity/) or PayPal (https://www.paypal.me/helmrocksecurity/) Donate (https://cash.me/$HelmRockSecurity/5 "Give $5 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/10 "Give $10 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/15 "Give $15 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/20 "Give $20 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/25 "Give $25 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/50 "Give $50 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/100 "Give $100 using Square Cash (free money transfer)") Donate (https://cash.me/$HelmRockSecurity/ "Choose your own donation amount using Square Cash (free money transfer)") Donate (https://paypal.me/helmrocksecurity/5 "Give $5 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/10 "Give $10 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/15 "Give $15 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/20 "Give $20 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/25 "Give $25 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/50 "Give $50 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/100 "Give $100 using PayPal (PayPal money transfer)") Donate (https://paypal.me/helmrocksecurity/ "Choose your own donation amount using PayPal (PayPal money transfer)") We greatly appreciate any and all donations - Thank you! Funds go to maintaining development servers and networks. Note: This is 3rd party software and isn't supported or endorsed by Ubiquiti Networks® • Overview (#overview) • Donate (#donations-and-sponsorship) • Copyright (#copyright) • Licenses (#licenses) • Latest Version (#latest-version) • Change Log (https://github.com/britannic/blacklist/blob/master/CHANGELOG.md) • Features (#features) • Compatibility (#compatibility) • Installation (#installation) • Using apt-get (#apt-get-installation---erlite-3-erpoe-5-er-x-er-x-sfp--unifi-gateway-3) • Using dpkg (#dpkg-installation---best-for-disk-space-constrained-routers) • Upgrade (#upgrade) • Removal (#removal) • Frequently Asked Questions (#frequently-asked-questions) • Can I donate to project? 
(#donations-and-sponsorship) • Does the install backup my blacklist configuration before deleting it? (#does-the-install-backup-my-blacklist-configuration-before-deleting-it) • Does update-dnsmasq run automatically? (#does-update-dnsmasq-run-automatically) • How do I add or delete sources? (#how-do-i-add-or-delete-sources) • How do I back up my blacklist configuration and restore it later? (#how-do-i-back-up-my-blacklist-configuration-and-restore-it-later) • How do I configure dnsmasq? (#how-do-i-configure-dnsmasq) • How do I configure local file sources instead of internet based ones? (#how-do-i-configure-local-file-sources-instead-of-internet-based-ones) • How do I disable/enable dnsmasq blacklisting? (#how-do-i-disableenable-dnsmasq-blacklisting) • How do I exclude or include a host or a domain? (#how-do-i-exclude-or-include-a-host-or-a-domain) • How do I globally exclude or include hosts or a domains? (#how-do-i-globally-exclude-or-include-hosts-or-a-domains) • How do I use the command line switches? (#how-do-i-use-the-command-line-switches) • How do can keep my USG configuration after an upgrade, provision or reboot? (#how-do-can-keep-my-usg-configuration-after-an-upgrade-provision-or-reboot) • How does whitelisting work? (#how-does-whitelisting-work) • What is the difference between blocking domains and hosts? (#what-is-the-difference-between-blocking-domains-and-hosts) • Which blacklist sources are installed by default? (#which-blacklist-sources-are-installed-by-default) EdgeMax dnsmasq DNS blacklisting and redirection is inspired by the users at EdgeMAX Community (https://community.ubnt.com/t5/EdgeMAX/bd-p/EdgeMAX/) [Top] (#contents) • Copyright © Visit Helm Rock Consulting at https://www.helmrock.com/2019 Helm Rock Consulting (https://www.helmrock.com/) [Top] (#contents) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the FreeBSD Project. [Top] (#contents) Latest versionLatest (https://github.com/britannic/blacklist/releases/latest) Release v1.1.6.2 (April 24, 2018) • Code refactor • Global whitelist and blacklist configuration files now have their own prefix: "roots" i.e. 
[Top] (#contents) • See changelog (https://github.com/britannic/blacklist/blob/master/CHANGELOG.md) for details. [Top] (#contents) • Adds DNS blacklisting integration to the EdgeRouter configuration • Generates configuration files used directly by dnsmasq to redirect DNS lookups • Integrated with the EdgeMax OS CLI • Any FQDN in the blacklist will force dnsmasq to return the configured DNS redirect IP address [Top] (#contents) • edgeos-dnsmasq-blacklist has been tested on the EdgeRouter ERLite-3, ERPoe-5, ER-X and UniFi Security Gateway USG-3 routers • EdgeMAX versions: v1.9.7+hotfix.4-v1.10.1, UniFi: v4.4.12-v4.4.18 • Integration could be adapted to work on VyOS and Vyatta derived ports, since EdgeOS is a fork and port of Vyatta 6.3 [Top] (#contents) • Using apt-get (#apt-get-installation---erlite-3-erpoe-5-er-x-er-x-sfp--unifi-gateway-3) - works for all routers • Using dpkg (#dpkg-installation---best-for-disk-space-constrained-routers) - best for disk space constrained routers [Top] (#contents) apt-get Installation - ERLite-3, ERPoe-5, ER-X, ER-X-SFP & UniFi-Gateway-3 • Add the blacklist Debian package repository using the router's CLI shell • Add the GPG signing key • Update the system repositories and install edgeos-dnsmasq-blacklist [Top] (#contents) dpkg Installation - best for disk space constrained routers EdgeRouter ERLite-3, ERPoe-5 & UniFi-Gateway-3 [Top] (#contents) EdgeRouter ER-X & ER-X-SFP • Ensure the router has enough space by removing unnecessary files • Now download and install the edgeos-dnsmasq-blacklist package [Top] (#contents) • If the repository is set up and you are using apt-get: • Note: if you are using dpkg, it cannot upgrade packages, so follow these instructions (#dpkg-installation---best-for-disk-space-constrained-routers) and the previous package version will be automatically removed before the new package version is installed [Top] (#contents) EdgeMAX - All Platforms [Top] (#contents) How do I disable/enable dnsmasq blacklisting? • Use these CLI configure commands: • Disable: • Enable: [Top] (#contents) Does the install backup my blacklist configuration before deleting it? • If a blacklist configuration already exists, the install routine will automatically back it up to /config/user-data/blacklist.$(date +'%FT%H%M%S').cmds [Top] (#contents) How do I back up my blacklist configuration and restore it later? • Use the following commands (make a note of the file name): • After installing the latest version, you can merge your backed up configuration: • If you prefer to delete the default configuration and restore your previous configuration, run these commands: [Top] (#contents) Which blacklist sources are installed by default? • You can use this command in the CLI shell to view the current sources after installation or view the log and see previous downloads: [Top] (#contents) How do I configure local file sources instead of internet based ones? • Use these commands to configure a local file source • File contents example for /config/user-data/blist.hosts.src: [Top] (#contents) How can I keep my USG configuration after an upgrade, provision or reboot? 
• Follow these instructions (https://britannic.github.io/install-edgeos-packages/) on how to automatically install edgeos-dnsmasq-blacklist • Create a config.gateway.json file following these instructions (https://help.ubnt.com/hc/en-us/articles/215458888-UniFi-How-to-further-customize-USG-configuration-with-config-gateway-json) • Here's a sample config.gateway.json (https://raw.githubusercontent.com/britannic/blacklist/master/config.gateway.json) [Top] (#contents) How do I add or delete sources? • Using the CLI configure command, to delete domains and hosts sources: • To add a source, first check it can serve a text list and also note the prefix (if any) before the hosts or domains, e.g. http://www.malwaredomainlist.com/ (http://www.malwaredomainlist.com/) has this format: • So the prefix is "127.0.0.1 " • Here's how to create the source in the CLI: [Top] (#contents) How do I globally exclude or include hosts or domains? • Use these example commands to globally include or exclude blacklisted entries: [Top] (#contents) How do I exclude or include a host or a domain? • Use these example commands to include or exclude blacklisted entries: [Top] (#contents) How does whitelisting work? • dnsmasq will whitelist any entries in the configuration file domains and hosts (servers) sections that have a hash in place of an IP address (the "#" forces dnsmasq to forward the DNS request to the router's configured nameservers) • i.e. servers (hosts) • i.e. domains [Top] (#contents) Does update-dnsmasq run automatically? • Yes, a scheduled task is created and runs daily at midnight; a random start delay is used to ensure other routers in the same time zone won't overload the source servers. • The random start delay window is configured in seconds using this command - this example sets the start delay between 1-10800 seconds (0-3 hours): • It can be reconfigured using these CLI configuration commands: • For example, to change the execution interval to every 6 hours, use this command: • In daily use, no additional interaction with update-dnsmasq is required. By default, cron will run update-dnsmasq at midnight each day to download the blacklist sources and update the dnsmasq configuration files in /etc/dnsmasq.d. dnsmasq will automatically be reloaded after the configuration file update is completed. [Top] (#contents) How do I use the command line switches? • update-dnsmasq has the following commandline switches available: [Top] (#contents) How do I configure dnsmasq? • dnsmasq may need to be configured to ensure blacklisting works correctly • Here is an example using the EdgeOS configuration shell [Top] (#contents) What is the difference between blocking domains and hosts? • The difference lies in the order of update-dnsmasq's processing algorithm. Domains are processed first and take precedence over hosts, so a blacklisted domain will force update-dnsmasq's source parser to exclude subsequent hosts from the same domain. This reduces dnsmasq's list of lookups, since it will automatically redirect hosts for a blacklisted domain. [Top] (#contents)
Package properties provides functions for reading and writing ISO-8859-1 and UTF-8 encoded .properties files and has support for recursive property expansion. Java properties files are ISO-8859-1 encoded and use Unicode literals for characters outside the ISO character set. Unicode literals can be used in UTF-8 encoded properties files but aren't necessary. To load a single properties file use MustLoadFile(): To load multiple properties files use MustLoadFiles(), which loads the files in the given order and merges the result. Missing properties files can be ignored if the 'ignoreMissing' flag is set to true. Filenames can contain environment variables which are expanded before loading. All of the different key/value delimiters ' ', ':' and '=' are supported, as well as the comment characters '!' and '#' and multi-line values. Properties stores all comments preceding a key and provides GetComments() and SetComments() methods to retrieve and update them. The convenience functions GetComment() and SetComment() allow access to the last comment. The WriteComment() method writes properties files including the comments and with the keys in the original order. This can be used for sanitizing properties files. Property expansion is recursive; circular references and malformed expressions are not allowed and cause an error. Expansion of environment variables is supported. The default property expansion format is ${key} but can be changed by setting different pre- and postfix values on the Properties object. Properties provides convenience functions for getting typed values with default values if the key does not exist or the type conversion failed. As an alternative, properties may be applied with the standard library's flag implementation at any time. Properties provides several MustXXX() convenience functions which will terminate the app if an error occurs. The behavior of the failure is configurable and the default is to call log.Fatal(err). To have the MustXXX() functions panic instead of logging the error, set a different ErrorHandler before you use the Properties package. You can also provide your own ErrorHandler function. The only requirement is that the error handler function must exit after handling the error. Properties can also be loaded into a struct via the `Decode` method, e.g. See `Decode()` method for the full documentation. The following documents provide a description of the properties file format. http://en.wikipedia.org/wiki/.properties http://docs.oracle.com/javase/7/docs/api/java/util/Properties.html#load%28java.io.Reader%29
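For orientation, here is a hedged sketch of the loading and lookup flow described above. It assumes the commonly used import path github.com/magiconair/properties; the file name and keys are illustrative only.

    package main

    import (
        "fmt"

        "github.com/magiconair/properties" // assumed import path
    )

    func main() {
        // Load a single file; MustLoadFile terminates the program on error
        // (the error handler is configurable, as described above). The
        // filename may contain environment variables.
        p := properties.MustLoadFile("${HOME}/config.properties", properties.UTF8)

        // Typed getters fall back to the supplied default when the key is
        // missing or the conversion fails.
        host := p.GetString("host", "localhost")
        port := p.GetInt("port", 8080)
        fmt.Println(host, port)

        // Keys can also be decoded straight into a struct.
        var cfg struct {
            Host string `properties:"host"`
            Port int    `properties:"port,default=8080"`
        }
        if err := p.Decode(&cfg); err != nil {
            panic(err)
        }
    }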
Package gohg is a Go client library for using the Mercurial dvcs via its Command Server. For Mercurial see: http://mercurial.selenic.com. For the Hg Command Server see: http://mercurial.selenic.com/wiki/CommandServer. ▪ Mercurial Any Mercurial version starting from 1.9 should be ok, because that is the version in which the Command Server was introduced. If you send wrong options to it through gohg, or commands or options not yet supported (or obsolete) in your Hg version, you'll simply get back an error from Hg itself, as gohg does not check them. But on the other hand gohg allows issuing new commands, not yet implemented by gohg; see further. ▪ Go gohg is currently developed with Go1.2.1. Though I started with the Go1.0 versions, I only recall having to change one or two minor things when moving to Go1.1.1. Updating to Go1.1.2 required no changes at all. I had an issue though with Go1.2, on Windows only, causing some tests using os.exec.Command to fail. I'll have to look into that further, to find out if I should report a bug. ▪ Platform I'm developing and testing both on Windows 7 and Ubuntu 12.04/13.04/13.10. But I suppose it should work on any other platform that supports Hg and Go. Only Go and its standard library are required. And Mercurial should be installed of course. At the commandline type: to have gohg available in your GOPATH. Start with importing the gohg package. Examples: All interaction with the Mercurial Command Server (Hg CS from now on) happens through the HgClient type, of which you have to create an instance: Then you can connect to the Hg CS as follows: 1. The Hg executable: The first parameter is the Mercurial command to use (typically just 'hg'). You can leave it blank to let the gohg tool use the default Mercurial command on the system. Having a parameter for the Hg command allows for using a different Hg version, for testing purposes for instance. 2. The repository path: The second parameter is the path to the repository you want to work on. You can leave it blank to have gohg use the repository it can find for the current path you are running the program in (searching upward in the folder tree if necessary). 3. The config for the session: The third parameter allows you to provide extra configuration for the session. This is not implemented yet. 4. Should gohg create a new repo before connecting? This fourth parameter allows you to indicate that you want gohg to first create a new Mercurial repo if it does not already exist in the path given by the second parameter. See the documentation for more detailed info. 5. The return value: The HgClient.Connect() method returns an error, so you can check if the connection succeeded and if it is safe to go on. Once the work is done, you can disconnect from the Hg CS using a typical Go idiom: The gohg tool sets some environment variables for the Hg CS session, to ensure it works correctly: Once we have a connection to a Hg CS we can do some work with the repository. This is done with commands, and gohg offers 3 ways to use them. 1. The command methods of the HgClient type. 2. The HgCmd type. 3. The ExecCmd() method of the HgClient type. Each of these has its own reason for existing. Commands return a byte slice containing the resulting data, and possibly an error. But there are a few exceptions (see api docs). 
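A hedged sketch of the connection steps above. The import path (bitbucket.org/gohg/gohg) and the exact signatures of NewHgClient, Connect, Disconnect and Identify are assumptions drawn from this description and the examples directory, so check the api docs before relying on them.

    package main

    import (
        "log"

        "bitbucket.org/gohg/gohg" // assumed import path
    )

    func main() {
        // Create the client and connect to the Hg Command Server. Parameters
        // follow the description above: Hg executable (blank = system default),
        // repo path (blank = repo found from the current directory), extra
        // config (not implemented yet), and whether to create a new repo.
        hc := gohg.NewHgClient()
        if err := hc.Connect("", "", nil, false); err != nil {
            log.Fatal(err)
        }
        defer hc.Disconnect()

        // Command methods mirror Hg commands and return the raw output as a
        // byte slice plus an error.
        out, err := hc.Identify(nil, nil)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("identify: %s", out)
    }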
If a command fails, the returned error contains 5 elements: 1) the name of the internal routine where the error was trapped, 2) the name of the HgClient command that was run, 3) the return code from Mercurial, 4) the full command that was passed to the Hg CS, and 5) any error message returned by Mercurial. So the command could return something like the following in the err variable when it fails: The command aliases (like 'id' for 'identify') are not implemented. But there are examples in identify.go and showconfig.go of how you can easily implement them yourself. This is the easiest way, a kind of convenience. And the most readable too. A con is that as a user you cannot know the exact command that was passed to Hg, without some extra mechanics. Each command has the same name as the corresponding Hg command, except it starts with a capital letter of course. An example (also see examples/example1.go): Note that these methods all use the HgCmd type internally. As such they are convenience wrappers around that type. You could also consider them a kind of syntactic sugar. If you just want to simply issue a command, nothing more, they are the way to go. The only way to obtain the command string sent to Hg when using these command methods is by calling the HgClient.ShowLastCmd() method afterwards, before issuing any other commands: Using the HgCmd type is kind of the standard way. It is a struct that you can instantiate for any command, and for which you can set the elements Name, Options and Params (see the api docs for more details). It allows for building the command step by step, and also for querying the exact command that will be sent to the Hg CS. A pro of this method is that it allows you to obtain the exact command string that will be passed to Mercurial before it is performed, by calling the CmdLine() method of HgCmd. This could be handy for logging, or for showing feedback to the user in a GUI program. (You could even call CmdLine() several times, and show the building of the command step by step.) An example (also see examples/example2.go): As you can see, this way requires some more coding. The source code will also show you that the HgCmd type is indeed used as the underlying type for the convenience HgClient commands, in all the New<hg-command>Cmd() constructors. The HgClient type has an extra method ExecCmd(), allowing you to pass a fully custom built command to Hg. It accepts a string slice that is supposed to contain all the elements of the complete command, as you would type it at the command line. It can be a convenient way to perform commands that are not yet implemented in gohg, or to make use of extensions to Hg (for which gohg offers no support (yet?)). An example (also see examples/example3.go): Just like on the commandline, options come before parameters. Options to commands use the same name as the long form of the Mercurial option they represent, but start with the necessary capital letter. An option's value can be of type bool, int or string. You just pass the value as the parameter to the option (= type conversion of the value to the option type). You can pass any number of options, as the elements of a slice. Options can occur more than once if appropriate (see the ones marked with '[+]' in the Mercurial help). Parameters are used to provide any arguments for a command that are not options. They are passed in as a string or a slice of strings, depending on the command. These parameters typically contain revisions, paths or filenames and so on. 
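Continuing the previous sketch's hc client, a minimal hedged example of the ExecCmd() escape hatch described above, which takes the complete command as a string slice (flag spellings are Mercurial's own; the method name comes from the text above).

    // Run "hg log --limit 2" through the generic ExecCmd method.
    out, err := hc.ExecCmd([]string{"log", "--limit", "2"})
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("%s", out)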
The gohg tool only checks if the options the caller gives are valid for that command. It does not check if the values are valid for the combination of that command and that option, as that is done by Mercurial. No need to implement that again. If an option is not valid for a command, it is silently ignored, so it is not passed to the Hg CS. A few options are not implemented, as they seemed not relevant for use with this tool (for instance: the global --color option, or the --print0 option for status). The gohg tool only returns errors, with as clear a message as possible, and never uses log.Fatal() nor panics, even if those may seem appropriate. It leaves that decision up to the caller: it's not up to this library to decide whether to do a retry or to abort the complete application. ▪ The following config settings are fixed in the code (at least for now): ▪ As mentioned earlier, passing config info is not implemented yet. ▪ Currently the only support for extensions to Mercurial is through the ExecCmd method. ▪ If multiple Hg CSs are used against the same repo, it is up to Mercurial to handle this correctly. ▪ Mercurial is always run in English. Internationalization is not necessary here, as the conversation with Hg is internal to the application. Please note that this tool is still in its very early stages. If you have suggestions or requests, or experience any problems, please use the issue tracker at https://bitbucket.org/gohg/gohg/issues?status=new&status=open. Or you could send a patch or a pull request. Copyright 2012-2014, The gohg Authors. All rights reserved. Use of this source code is governed by a BSD style license that can be found in the LICENSE.md file.
Package conv provides fast and intuitive conversions across Go types. Bool conversion supports all the paths provided by the standard library's strconv.ParseBool when converting from a string; all other conversions are simply true when the value is not the type's zero value. As a special case, zero length map and slice types are also false, even if initialized. Duration conversion supports all the paths provided by the standard library's time.ParseDuration when converting from strings, with a couple of enhancements outlined below. Map conversion will infer the conversion functions to use from the key and element types of the given map. The second argument will be walked as described in the supporting package, go-iter. An error is returned if the below restrictions are not met: Excerpt from github.com/cstockton/go-iter iter.Walk: Walk will recursively walk the given interface value as long as an error does not occur. The pair func will be given an interface value for each value visited during walking and is expected to return an error if it thinks the traversal should end. A nil value and error is given to the walk func if an inaccessible value (can't reflect.Interface()) is found. Walk is called on each element of maps, slices and arrays. If the underlying iterator is configured for channels it receives until one fails. Channels should probably be avoided as ranging over them is more concise. Numeric conversion from other numeric values of an identical type will be returned without modification. Numeric conversions deviate slightly from Go when dealing with under/over flow. When performing a conversion operation that would overflow, we instead assign the maximum value for the target type. Similarly, conversions that would underflow are assigned the minimum value for that type, meaning unsigned integers are given zero values instead of spilling into large positive integers. All methods and functions accept any type of value for conversion; if unable to find a reasonable conversion path they will return the target type's zero value. The Conv struct will also report an error on failure, while all the top level functions (conv.Bool(...), conv.Time(...), etc) will only return a single value for cases in which you wish to leverage zero values. These functions are powered by the "DefaultConverter" variable so you may replace it with your own Converter or a Conv struct to adjust behavior. In short, panics should not occur within this library under any circumstance. This obviously excludes any oddities that may surface when the runtime is not in a healthy state, i.e. underlying system instability or memory exhaustion. If you are able to create a reproducible panic please file a bug report. Slice conversion will infer the element type from the given slice, using the associated conversion function as the given structure is traversed recursively. The behavior if the value is mutated during iteration is undefined, though at worst an error will be returned as this library will never panic. An error is returned if the below restrictions are not met: String conversion from any values outside the cases below will simply be the result of calling fmt.Sprintf("%v", value), meaning it cannot fail. An error is still provided and you should check it to be future proof.
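A short sketch of the top-level helpers mentioned above (conv.Bool, conv.Int64, conv.Duration). The import path github.com/cstockton/go-conv is an assumption, and, per the description above, these helpers favour zero values over returning errors.

    package main

    import (
        "fmt"

        conv "github.com/cstockton/go-conv" // assumed import path
    )

    func main() {
        // Conversions fall back to the target type's zero value when no
        // reasonable conversion path exists.
        fmt.Println(conv.Bool("true"))     // true
        fmt.Println(conv.Int64("123"))     // 123
        fmt.Println(conv.Duration("1.5s")) // 1.5s
        fmt.Println(conv.Bool([]int{}))    // false: zero-length slice
    }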
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. 
This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
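The first example mentioned above (generate a pairing, pick random group elements, apply the pairing operation) might look roughly like this. The import path github.com/Nik-U/pbc and the exact constructor names are assumptions about this wrapper, and the parameter sizes are deliberately small rather than secure.

    package main

    import (
        "fmt"

        "github.com/Nik-U/pbc" // assumed import path for this wrapper
    )

    func main() {
        // Generate a symmetric (Type A) pairing. These parameter sizes keep
        // the example fast; they are not secure choices for real systems.
        params := pbc.GenerateA(160, 512)
        pairing := params.NewPairing()

        // Random elements from G1 and G2, paired into GT.
        g := pairing.NewG1().Rand()
        h := pairing.NewG2().Rand()
        gt := pairing.NewGT().Pair(g, h)
        fmt.Println("e(g, h) =", gt)
    }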
The clnt package provides definitions and functions used to implement a 9P2000 file client. The p9 package provides the definitions and functions used to implement the 9P2000 protocol. TODO. All the packet conversion code in this file is crap and needs a rewrite. The srv package provides definitions and functions used to implement a 9P2000 file server.
Vet examines Go source code and reports suspicious constructs, such as Printf calls whose arguments do not align with the format string. Vet uses heuristics that do not guarantee all reports are genuine problems, but it can find errors not caught by the compilers. Vet is normally invoked using the go command by running "go vet": vets the package in the current directory. vets the package whose path is provided. Use "go help packages" to see other ways of specifying which packages to vet. Vet's exit code is 2 for erroneous invocation of the tool, 1 if a problem was reported, and 0 otherwise. Note that the tool does not check every possible problem and depends on unreliable heuristics so it should be used as guidance only, not as a firm indicator of program correctness. By default the -all flag is set so all checks are performed. If any flags are explicitly set to true, only those tests are run. Conversely, if any flag is explicitly set to false, only those tests are disabled. Thus -printf=true runs the printf check, -printf=false runs all checks except the printf check. By default vet uses the object files generated by 'go install some/pkg' to typecheck the code. If the -source flag is provided, vet uses only source code. Available checks: Flag: -asmdecl Mismatches between assembly files and Go function declarations. Flag: -assign Check for useless assignments. Flag: -atomic Common mistaken usages of the sync/atomic package. Flag: -bool Mistakes involving boolean operators. Flag: -buildtags Badly formed or misplaced +build tags. Flag: -cgocall Detect some violations of the cgo pointer passing rules. Flag: -composites Composite struct literals that do not use the field-keyed syntax. Flag: -copylocks Locks that are erroneously passed by value. Flag: -httpresponse Mistakes deferring a function call on an HTTP response before checking whether the error returned with the response was nil. Flag: -lostcancel The cancelation function returned by context.WithCancel, WithTimeout, and WithDeadline must be called or the new context will remain live until its parent context is cancelled. (The background context is never cancelled.) Flag: -methods Non-standard signatures for methods with familiar names, including: Flag: -nilfunc Comparisons between functions and nil. Flag: -printf Suspicious calls to functions in the Printf family, including any functions with these names, disregarding case: The -printfuncs flag can be used to redefine this list. If the function name ends with an 'f', the function is assumed to take a format descriptor string in the manner of fmt.Printf. If not, vet complains about arguments that look like format descriptor strings. It also checks for errors such as using a Writer as the first argument of Printf. Flag: -rangeloops Incorrect uses of range loop variables in closures. Flag: -shadow=false (experimental; must be set explicitly) Variables that may have been unintentionally shadowed. Flag: -shift Shifts equal to or longer than the variable's length. Flag: -structtags Struct tags that do not follow the format understood by reflect.StructTag.Get. Well-known encoding struct tags (json, xml) used with unexported fields. Flag: -tests Mistakes involving tests including functions with incorrect names or signatures and example tests that document identifiers not in the package. Flag: -unreachable Unreachable code. Flag: -unsafeptr Likely incorrect uses of unsafe.Pointer to convert integers to pointers. 
A conversion from uintptr to unsafe.Pointer is invalid if it implies that there is a uintptr-typed word in memory that holds a pointer value, because that word will be invisible to stack copying and to the garbage collector. Flag: -unusedresult Calls to well-known functions and methods that return a value that is discarded. By default, this includes functions like fmt.Errorf and fmt.Sprintf and methods like String and Error. The flags -unusedfuncs and -unusedstringmethods control the set. These flags configure the behavior of vet: For testing and debugging vet can be run directly by invoking "go tool vet" or just running the binary. Run this way, vet might not have up to date information for imported packages. vets the files named, all of which must be in the same package. recursively descends the directory, vetting each package it finds. Vet is a simple checker for static errors in Go source code. See doc.go for more information.
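As a concrete illustration of the printf check, the call below mixes a %d verb with a string argument; running "go vet" on the file reports it (the exact diagnostic wording varies between Go releases).

    package main

    import "fmt"

    func main() {
        name := "gopher"
        // go vet flags this call: the %d verb does not match the string
        // argument (diagnostic wording is approximate).
        fmt.Printf("hello %d\n", name)
    }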
Package flags provides an extensive command line option parser. The flags package is similar in functionality to the go built-in flag package but provides more options and uses reflection to provide a convenient and succinct way of specifying command line options. The following features are supported in go-flags: Additional features specific to Windows: The flags package uses structs, reflection and struct field tags to allow users to specify command line options. This results in very simple and concise specification of your application options. For example: This specifies one option with a short name -v and a long name --verbose. When either -v or --verbose is found on the command line, a 'true' value will be appended to the Verbose field. e.g. when specifying -vvv, the resulting value of Verbose will be {[true, true, true]}. Slice options work exactly the same as primitive type options, except that whenever the option is encountered, a value is appended to the slice. Map options from string to primitive type are also supported. On the command line, you specify the value for such an option as key:value. For example Then, the AuthorInfo map can be filled with something like -a name:Jesse -a "surname:van den Kieboom". Finally, for full control over the conversion between command line argument values and options, user defined types can choose to implement the Marshaler and Unmarshaler interfaces. The following is a list of tags for struct fields supported by go-flags: Either the `short:` tag or the `long:` must be specified to make the field eligible as an option. Option groups are a simple way to semantically separate your options. All options in a particular group are shown together in the help under the name of the group. Namespaces can be used to specify option long names more precisely and emphasize the options affiliation to their group. There are currently three ways to specify option groups. The flags package also has basic support for commands. Commands are often used in monolithic applications that support various commands or actions. Take git for example, all of the add, commit, checkout, etc. are called commands. Using commands you can easily separate multiple functions of your application. There are currently two ways to specify a command. The most common, idiomatic way to implement commands is to define a global parser instance and implement each command in a separate file. These command files should define a go init function which calls AddCommand on the global parser. When parsing ends and there is an active command and that command implements the Commander interface, then its Execute method will be run with the remaining command line arguments. Command structs can have options which become valid to parse after the command has been specified on the command line. It is currently not valid to specify options from the parent level of the command after the command name has occurred. Thus, given a top-level option "-v" and a command "add": go-flags has builtin support to provide bash completion of flags, commands and argument values. To use completion, the binary which uses go-flags can be invoked in a special environment to list completion of the current command line argument. It should be noted that this `executes` your application, and it is up to the user to make sure there are no negative side effects (for example from init functions). 
Setting the environment variable `GO_FLAGS_COMPLETION=1` enables completion by replacing the argument parsing routine with the completion routine which outputs completions for the passed arguments. The basic invocation to complete a set of arguments is therefore: where `completion-example` is the binary, `arg1` and `arg2` are the current arguments, and `arg3` (the last argument) is the argument to be completed. If the GO_FLAGS_COMPLETION is set to "verbose", then descriptions of possible completion items will also be shown, if there are more than 1 completion items. To use this with bash completion, a simple file can be written which calls the binary which supports go-flags completion: Completion requires the parser option PassDoubleDash and is therefore enforced if the environment variable GO_FLAGS_COMPLETION is set. Customized completion for argument values is supported by implementing the flags.Completer interface for the argument value type. An example of a type which does so is the flags.Filename type, an alias of string allowing simple filename completion. A slice or array argument value whose element type implements flags.Completer will also be completed.
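To make the Verbose and AuthorInfo descriptions above concrete, here is a small sketch using the usual go-flags import path (github.com/jessevdk/go-flags); field names other than Verbose and AuthorInfo are illustrative.

    package main

    import (
        "fmt"
        "os"

        flags "github.com/jessevdk/go-flags"
    )

    type Options struct {
        // -v/--verbose may be given multiple times; each occurrence appends true.
        Verbose []bool `short:"v" long:"verbose" description:"Show verbose debug information"`

        // -a name:Jesse -a "surname:van den Kieboom" fills the map as key:value pairs.
        AuthorInfo map[string]string `short:"a" description:"Author information"`
    }

    func main() {
        var opts Options
        args, err := flags.Parse(&opts)
        if err != nil {
            os.Exit(1)
        }
        fmt.Println(len(opts.Verbose), opts.AuthorInfo, args)
    }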
rm2pdf MIT Licensed RCL January 2020 This programme attempts to create annotated A4 PDF files from reMarkable tablet file groups (RM bundles), including .rm files recording marks. Normally these files will be in a local directory, such as an xochitl directory synchronised to a tablet over sshfs. The programme takes as input either: * The path to the PDF file which has had annotations made to it * The path to the RM bundle with uuid, such as <path>/<uuid> with no filename extension, together with a PDF template to use for the background (a blank A4 template is provided in templates/A4.pdf). The resulting PDF is layered, with the background and each .rm file in a separate PDF layer. The .rm file marks are stroked using the fpdf PDF library, although .rm tilt and pressure characteristics are not represented in the PDF output. PDF files from sources such as Microsoft Word do not always work well. It can help to rewrite them using the pdftk tool, e.g. by doing Custom colours for some pens can be specified using the -c or --colours switch, which overrides the default pen selection. A second -c switch sets the colours on the second layer, and so on. Example of processing an rm bundle without a pdf: Example of processing an rm bundle with a pdf, and per-layer colours: General options: Warning: the OutputFile will be overwritten if it exists. The parser is a go port of the reMarkable tablet "lines" or ".rm" file parser, with binary decoding hints drawn from rm2svg https://github.com/reHackable/maxio/blob/master/tools/rM2svg which in turn refers to https://github.com/lschwetlick/maxio/tree/master/tools. Python struct format codes referred to in the parser, such as "<{}sI", are from rm2svg. RMParser provides a python-like iterator based on bufio.Scan, which iterates over the referenced reMarkable .rm file returning a data structure consisting of each path with its associated layer and path segments. Usage example: Pen selections are hard-coded in stroke.go with widths, opacities and colours. The StrokeSetting interface "Width" is used to scale strokes based on nothing more than what seems to be about right. Resolving the page sizes and reMarkable output resolution was based on the reMarkable png templates and viewing the reMarkable app's output x and y widths. These dimensions are noted in pdf.go in PDF_WIDTH_IN_MM and PDF_HEIGHT_IN_MM. Conversion from mm to points (MM_TO_RMPOINTS) and from points to the resolution of the reMarkable tablet (PTS_2_RMPTS) is also set in pdf.go. The theoretical conversion factor is slightly altered based on the output from various tests, including those in the testfiles directory. To view the testfiles after processing, use or alter the paths used in the tests.
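The mm-to-points arithmetic mentioned for pdf.go can be illustrated with a tiny standalone calculation. The constant names follow those listed above, but the values here are illustrative (standard A4 size and PostScript points per millimetre), not rm2pdf's actual, slightly tuned factors.

    package main

    import "fmt"

    // Illustrative values only; rm2pdf adjusts its factors after testing.
    const (
        PDF_WIDTH_IN_MM  = 210.0
        PDF_HEIGHT_IN_MM = 297.0
        MM_TO_RMPOINTS   = 72.0 / 25.4 // theoretical points per millimetre
    )

    func main() {
        fmt.Printf("A4 in points: %.1f x %.1f\n",
            PDF_WIDTH_IN_MM*MM_TO_RMPOINTS, PDF_HEIGHT_IN_MM*MM_TO_RMPOINTS)
    }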
Farmhash is a successor to Cityhash (both from Google). Copyright (c) 2014 Google, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Converted from the original C++ source code by building it (on a Ubuntu based system with an Intel CPU), then copying the build command and editing it to generate the output of the C pre-processor stage. This showed a version of what code was required. I then copied code from the original files to convert to Go in order to preserve the original comments. Note: If you want to compare results between this Go library and the original, then when building the C++ it's important to build with -DFARMHASH_DEBUG=0 (or edit src/farmhash.cc and add a #define), otherwise the results are byte swapped for reasons I don't understand. Of course a byte swapped hash is still a hash. To test, I wrote a small program in C++ to generate both hashes and results from internal routines to add to the test routines here in the Go version. This ensures these funcs work the same as the C++ versions. To obey Go export rules some functions had their first character case changed. TODO: Sort out all public vs private names & rationalise my use of prefixes (cc, mk, na) that I use to avoid clashes. TODO: Figure out how to hash incrementally to use with the Go standard hash package. TODO: More testing! Note: An earlier version was a more literal conversion and lots of functions passed a len parameter after every slice passed. Note: I'm sure others have already converted farmhash to Go but I'm improving my Go skills and wanted the experience.
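A hedged sketch of hashing a byte slice with this port; the import path and the exact exported names (Hash32, Hash64) are assumptions based on the export-rule note above, so adjust them to the package's real identifiers.

    package main

    import (
        "fmt"

        farmhash "github.com/leemcloughlin/gofarmhash" // assumed import path
    )

    func main() {
        data := []byte("hello, farmhash")
        // 32- and 64-bit hashes of a byte slice; seeded variants exist in the
        // original C++ and may also be exported here.
        fmt.Printf("Hash32: %x\n", farmhash.Hash32(data))
        fmt.Printf("Hash64: %x\n", farmhash.Hash64(data))
    }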
Package into provides a contemporary type conversion library for Go 1.18+, leveraging generics to offer safe and flexible type conversions between basic Go types. File Structure: Each file is named after its target type (e.g., int32.go contains conversions to int32) and provides both generic and direct conversion functions. Performance Note: The generic functions use reflection and may have performance overhead. For performance-critical code, use the direct conversion functions (e.g., Float64ToInt32). For more information and examples, see: https://github.com/zenless-lab/into
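As a hedged sketch of the direct-conversion style mentioned above, the snippet below assumes Float64ToInt32 returns the converted value together with an error for out-of-range input; the exact signature and error behaviour should be checked against the package itself:

    package main

    import (
        "fmt"

        "github.com/zenless-lab/into" // illustrative usage; verify the real API
    )

    func main() {
        // Direct conversion: assumed here to return (int32, error) so that
        // overflow or truncation can be reported rather than silently lost.
        n, err := into.Float64ToInt32(3.0)
        if err != nil {
            fmt.Println("conversion failed:", err)
            return
        }
        fmt.Println(n)
    }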
Command gencodec generates marshaling methods for struct types. When gencodec is invoked on a directory and type name, it creates a Go source file containing JSON, YAML and TOML marshaling methods for the type. The generated methods add features which the standard json package cannot offer. The gencodec:"required" tag can be used to generate a presence check for the field. The generated unmarshaling method returns an error if a required field is missing. Other struct tags are carried over as-is. The "json", "yaml", "toml" tags can be used to rename a field when marshaling. Example: An invocation of gencodec can specify an additional 'field override' struct from which marshaling type replacements are taken. If the override struct contains a field whose name matches the original type, the generated marshaling methods will use the overridden type and convert to and from the original field type. If the override struct contains a field F of type T, which does not exist in the original type, and the original type has a method named F with no arguments and return type assignable to T, the method is called by Marshal*. If there is a matching method F but the return type or arguments are unsuitable, an error is raised. In this example, the specialString type implements json.Unmarshaler to enforce additional parsing rules. When json.Unmarshal is used with type foo, the specialString unmarshaler will be used to parse the value of SpecialField. The result of foo.Func() is added to the result on marshaling under the key `id`. If the input on unmarshal contains a key `id`, this field is ignored. Field types in the override struct must be trivially convertible to the original field type. gencodec's definition of 'convertible' is less restrictive than the usual rules defined in the Go language specification. The following conversions are supported: If the fields are directly assignable, no conversion is emitted. If the fields are convertible according to Go language rules, a simple conversion is emitted. Example input code: The generated code will contain: If the fields are of map or slice type and the element (and key) types are convertible, a simple loop is emitted. Example input code: The generated code is similar to this snippet:
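As a hedged sketch of the tags and override mechanism described above, the snippet below uses the identifiers mentioned in this description (foo, Func, SpecialField, specialString); the go:generate invocation and its flags are assumptions to be verified against the tool's own help output:

    package example

    import "encoding/json"

    //go:generate gencodec -type foo -field-override fooMarshaling -out gen_foo_json.go

    // foo is the type the marshaling methods are generated for.
    type foo struct {
        Required     string `gencodec:"required"` // unmarshaling reports an error if missing
        Renamed      int    `json:"other_name"`   // tag is carried over into the generated methods
        SpecialField string
    }

    // Func's result is marshaled under the key `id` because the override
    // struct declares a matching field with that json tag.
    func (f foo) Func() string { return f.Required }

    // fooMarshaling is the field override struct: SpecialField is replaced by
    // specialString during marshaling and unmarshaling.
    type fooMarshaling struct {
        SpecialField specialString
        Func         string `json:"id"`
    }

    // specialString enforces additional parsing rules via json.Unmarshaler.
    type specialString string

    func (s *specialString) UnmarshalJSON(input []byte) error {
        // hypothetical additional parsing rules would go here
        return json.Unmarshal(input, (*string)(s))
    }

Running gencodec on this input would then be expected to emit MarshalJSON/UnmarshalJSON (and the YAML/TOML equivalents) for foo in the named output file.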
Package glick provides a simple plug-in environment. The central feature of glick is the Library, which contains example types for the input and output of each API on the system. Each of these APIs can have a number of "actions" upon it; for example, a file conversion API may have one action for each of the file formats to be converted. Using the Run() method of glick.Library, a given API/Action combination runs the code in a function of Go type Plugin. Although it is easy to create your own plugins, there are three types built in: Remote Procedure Calls (RPC), simple URL fetch (URL) and OS commands (CMD). A number of sub-packages simplify the use of third-party libraries when providing further types of plugin. The mapping of which plugin code to run occurs at three levels: 1) Initialisation and set-up code for the application establishes the glick.Library using glick.New(), then adds API specifications using RegAPI(); it may also add the application's base plugins using RegPlugin(). 2) The base set-up can be extended and overridden using a JSON-format configuration description (probably held in a file) by calling the Config() method of glick.Library. This configuration process is extensible, using the AddConfigurator() method - see the glick/glpie or glick/glkit sub-packages for examples. 3) Which plugin to use can also be set up or overridden at runtime within Run(). Each call to a plugin includes a Context (as described in https://blog.golang.org/context). This context can contain, for example, user details, which could be matched against a database to decide whether that user should be directed to one plugin rather than another for a given action. It could also be used to wrap every plugin call by a particular user with some other code, for example to log or meter activity.
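The following self-contained sketch illustrates the API/action-to-plugin mapping idea described above; it deliberately uses its own toy types rather than glick's actual signatures, which should be taken from the package documentation:

    package main

    import (
        "context"
        "fmt"
    )

    // plugin mirrors the idea of a Go-typed plugin function: it receives a
    // context plus an input value and returns an output value or an error.
    type plugin func(ctx context.Context, in interface{}) (interface{}, error)

    // library maps an API name and an action name onto a plugin.
    type library struct {
        plugins map[string]map[string]plugin
    }

    func (l *library) register(api, action string, p plugin) {
        if l.plugins == nil {
            l.plugins = map[string]map[string]plugin{}
        }
        if l.plugins[api] == nil {
            l.plugins[api] = map[string]plugin{}
        }
        l.plugins[api][action] = p
    }

    func (l *library) run(ctx context.Context, api, action string, in interface{}) (interface{}, error) {
        p, ok := l.plugins[api][action]
        if !ok {
            return nil, fmt.Errorf("no plugin for %s/%s", api, action)
        }
        return p(ctx, in)
    }

    func main() {
        var lib library
        lib.register("convert", "markdown", func(ctx context.Context, in interface{}) (interface{}, error) {
            return "converted: " + in.(string), nil
        })
        out, err := lib.run(context.Background(), "convert", "markdown", "hello")
        fmt.Println(out, err)
    }

glick layers its configuration-driven and runtime overrides on top of this kind of lookup, with the Context passed through to each plugin call.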