Package bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call.

When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets.

The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue.

The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error.

The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression.

The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a [regular expression](https://github.com/google/re2/wiki/Syntax) string, which will be used to match a portion of the map keys and function names that should be stripped out. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get:

With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.

When you want to embed big files or plenty of files, the generated output gets really big (possibly over 3 MB). Even though the generated file is not meant to be read, you may still need to run analysis tools or open it in an editor, which can become slow with such a file. Generating big files can be avoided with the `-split` command line option. In that case, the given output is a directory path; the tool will generate one source file per file to embed, plus a common file named `common.go` which contains the shared parts such as the API.
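The options above can also be set programmatically through the Config struct mentioned at the top. The following is a minimal sketch, not a canonical invocation; NewConfig and the exact field names (Input, InputConfig, and so on) are assumptions to verify against the package's actual Config definition:

	package main

	import (
		"log"

		"github.com/jteeuwen/go-bindata" // package bindata
	)

	func main() {
		c := bindata.NewConfig() // assumed constructor with defaults
		c.Package = "static"     // package name of the generated file
		c.Output = "static/bindata.go"
		c.Prefix = "assets/" // stripped from map keys and function names
		c.Tags = "embed"     // emitted as a // +build embed line
		c.Debug = false      // true: generate stubs that read from disk
		c.NoMemCopy = false  // true: .rodata hack, read-only slices
		c.NoCompress = false // true: skip gzip compression
		c.Input = []bindata.InputConfig{{Path: "assets", Recursive: true}}

		if err := bindata.Translate(c); err != nil {
			log.Fatal(err)
		}
	}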
Package docopt parses command-line arguments based on a help message. Given a conventional command-line help message, docopt processes the arguments. See https://github.com/docopt/docopt#help-message-format for a description of the help message format.

This package exposes three different APIs, depending on the level of control required. The first, simplest way to parse your docopt usage is to just call: This will use os.Args[1:] as the argv slice, and use the default parser options.

If you want to provide your own version string and args, then use: If the last parameter (version) is a non-empty string, it will be printed when --version is given in the argv slice.

Finally, we can instantiate our own docopt.Parser which gives us control over how things like help messages are printed and whether to exit after displaying usage messages, etc. In particular, setting your own custom HelpHandler function makes unit testing your own docs with example command line invocations much more enjoyable.

All three of these return a map of option names to the values parsed from argv, and an error or nil. You can get the values using the helpers, or just treat it as a regular map: Additionally, you can `Bind` these to a struct, assigning option values to the exported fields of that struct, all at once.
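A compact sketch of the levels described above. ParseDoc, ParseArgs, and Opts.Bind follow the docopt-go API as documented, but double-check the signatures against the version you vendor:

	package main

	import (
		"fmt"

		docopt "github.com/docopt/docopt-go"
	)

	const usage = `Counter.

	Usage:
	  counter [--count=<n>] <name>

	Options:
	  --count=<n>  How many times to greet [default: 1].`

	func main() {
		// Simplest form: os.Args[1:] and default parser options.
		opts, _ := docopt.ParseDoc(usage)

		// With an explicit argv slice and a version string.
		opts, _ = docopt.ParseArgs(usage, []string{"world", "--count=3"}, "counter 1.0")

		// Use the result via the helpers, or bind it to a struct.
		name, _ := opts.String("<name>")
		var conf struct {
			Name  string `docopt:"<name>"`
			Count int    `docopt:"--count"`
		}
		_ = opts.Bind(&conf)
		fmt.Println(name, conf.Count)
	}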
Package maps provides a client library for the Google Maps Web Service APIs. Please see https://developers.google.com/maps/documentation/webservices/ for an overview of the Maps Web Service API suite.
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such exists, the plan includes hints in form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when it has a UNIQUE clause, to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.

2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default.

2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.

2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)

2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section.

2015-01-17: Logical operators || and && have now alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".

2015-01-12: ACID guarantees were tightened at the cost of performance in some cases.
The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is, strictly speaking, no longer required. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.

2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.

2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)

2014-06-05: RecordSet interface extended by new methods FirstRow and Rows.

2014-06-02: Indices on id() are now used by SELECT statements.

2014-05-07: Introduction of Marshal, Schema, Unmarshal.

2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX.

2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.

2014-04-11: Introduction of LIMIT, OFFSET.

2014-04-10: Introduction of query rewriting.

2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option):

The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified.

QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters.

Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement.

Implementation restriction: A byte order mark is disallowed anywhere in QL statements.

The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter.
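Referring back to the 'purego' build tag discussed above, a typical CGO-free build could look like the following; the package paths are illustrative, so substitute the import path of your QL checkout:

	go install -tags purego github.com/cznic/ql
	go install -a -tags purego github.com/cznic/ql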
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters.

Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.

Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token.

The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.)

Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared, however note that no keyword can be used as an identifier.

Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example

The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.

The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation

An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example

A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example

An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example

A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.
Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example

A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals.

Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.

Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string

If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal.

Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example

Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type.

A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.
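As a hedged illustration of the parameter mechanism just described, the following sketch passes values for $1 and $2 at execution time through the package's Go API; OpenMem, NewRWCtx and the Run signature follow the package API as this author recalls it, so verify them before use:

	package main

	import (
		"log"

		"github.com/cznic/ql"
	)

	func main() {
		db, err := ql.OpenMem() // in-memory DB for the example
		if err != nil {
			log.Fatal(err)
		}

		ctx := ql.NewRWCtx() // mutations require a transaction context
		// $1 and $2 are filled from the trailing argument list at
		// execution time; '?' placeholders would work the same way.
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (c1 int, c2 string);
				INSERT INTO t VALUES($1, $2);
			COMMIT;`,
			42, "foo",
		); err != nil {
			log.Fatal(err)
		}
	}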
Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G.

A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool.

A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.

A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment.

A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len.

A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time.

The following functions are implicitly declared

An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example

Primary expressions are the operands for unary and binary expressions. For example

A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply:

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string s is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte.

If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.

Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error.

Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone.

Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL.

Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators".

Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types:

Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators".

Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false.

Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar.

Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.
	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers

	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers

	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator String addition creates a new string by concatenating the operands.

A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration.

For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation

The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.

For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific.

Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL.

For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on "wrap around".

For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program.

Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows:

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.
- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).
- Integer values are comparable and ordered, in the usual way. Note that durations are integers.
- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.
- Rational values are comparable and ordered, in the usual way.
- String values are comparable and ordered, lexically byte-wise.
- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.
- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.
- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.
- x's type and T are both integer or floating point types.
- x's type and T are both complex types.
- x is an integer, except bigint or duration, and T is a string type.

Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer.
Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example

A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction

A database is effectively read only outside of a transaction. Statements not requiring a transaction

The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example

Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as any of the existing tables and it also cannot be the same as any column name of the table the index is on. For example

Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists.

A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration.

Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example

The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists.

The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs.

The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name:

When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.
4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which allows inserting values calculated from the same table the rows are inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

Explain statements produce a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions.

A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets.

The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.
1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example

Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying for DB metadata, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example

Date returns the time corresponding to yyyy-mm-dd hh:mm:ss + nsec nanoseconds in the appropriate zone for that time in the given location.
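As a hedged sketch of querying these meta tables from Go (the Run and Recordset.Do signatures follow the package API as this author recalls it; verify against your ql version):

	package main

	import (
		"fmt"
		"log"

		"github.com/cznic/ql"
	)

	func main() {
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}
		ctx := ql.NewRWCtx()
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (c int);
			COMMIT;`,
		); err != nil {
			log.Fatal(err)
		}

		// SELECT needs no transaction, hence the nil context.
		rs, _, err := db.Run(nil, "SELECT Name, Schema FROM __Table;")
		if err != nil {
			log.Fatal(err)
		}
		for _, r := range rs {
			if err := r.Do(false, func(data []interface{}) (bool, error) {
				fmt.Println(data[0], data[1]) // table name, CREATE statement
				return true, nil              // true: keep iterating
			}); err != nil {
				log.Fatal(err)
			}
		}
	}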
The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which.

A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.

The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). The same call thus returns one string on a machine in the CET time zone, but may return a different one on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join.
For example

The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example

The column values must be of an ordered type.

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.

The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, defined to be Mon Jan 2 15:04:05 -0700 MST 2006, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC.

When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset.

If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.

The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, or imag functions is NULL the result is NULL.

For the numeric types, the following sizes are guaranteed

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use.

The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to

which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.

Note: Existing QL DBs can be used and indices can be added to them.
However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example

Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
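To make the preceding index discussion concrete, here is a minimal sketch using the ql package's Go API. This is an assumption-laden illustration: the import path, the ql.OpenFile/ql.NewRWCtx/DB.Run signatures, and the table, index and file names are taken to be as in the cznic/ql package, and are illustrative only.

	package main

	import "github.com/cznic/ql" // assumed import path

	func main() {
		// Open (or create) a file-backed database.
		db, err := ql.OpenFile("test.db", &ql.Options{CanCreate: true})
		if err != nil {
			panic(err)
		}
		defer db.Close()

		// Create a table and an index on its field f.
		if _, _, err := db.Run(ql.NewRWCtx(), `
			BEGIN TRANSACTION;
				CREATE TABLE t (f int);
				CREATE INDEX x ON t (f);
			COMMIT;
		`); err != nil {
			panic(err)
		}

		// 'f < 42' matches the recognized pattern
		// 'indexed-field relOp constant', so index x can be used.
		if _, _, err := db.Run(nil, "SELECT * FROM t WHERE f < 42;"); err != nil {
			panic(err)
		}
	}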
Package gousb provides a low-level interface to attached USB devices. A Context manages all resources necessary for communicating with USB devices. Through the Context users can iterate over available USB devices.

The USB standard defines a mechanism of discovering USB device functionality through descriptors. After the device is attached and initialized by the host stack, it's possible to retrieve its descriptor (the device descriptor). It contains elements such as product and vendor IDs, bus number and device number (address) on the bus.

In gousb, the Device struct represents a USB device. The Device struct’s Desc field contains all known information about the device.

Among other information in the device descriptor is a list of configuration descriptors, accessible through Device.Desc.Configs.

The USB standard allows one physical USB device to switch between different sets of behaviors, or working modes, by selecting one of the offered configs (each device has at least one). This allows the same device to sometimes present itself as e.g. a 3G modem, and sometimes as a flash drive with the drivers for that 3G modem. Configs are mutually exclusive; each device can have only one active config at a time. Switching the active config performs a light-weight device reset. Each config in the device descriptor has a unique identification number.

In gousb a device config needs to be selected through Device.Config(num). It returns a Config struct that represents the device in this particular configuration. The configuration descriptor is accessible through Config.Desc.

A config descriptor determines the list of available USB interfaces on the device. Each interface is a virtual device within the physical USB device and its active config. There can be many interfaces active concurrently. Interfaces are enumerated sequentially starting from zero.

Additionally, each interface comes with a number of alternate settings for the interface, which are somewhat similar to device configs, but on the interface level. Each interface can have only a single alternate setting active at any time. Alternate settings are enumerated sequentially starting from zero.

In gousb an interface and its alternate setting can be selected through Config.Interface(num, altNum). The Interface struct is the representation of the claimed interface with a particular alternate setting. The descriptor of the interface is available through Interface.Setting.

An interface with a particular alternate setting defines up to 30 data endpoints, each identified by a unique address. The endpoint address is a combination of endpoint number (1..15) and endpoint directionality (IN/OUT). IN endpoints have addresses 0x81..0x8f, while OUT endpoints 0x01..0x0f. An endpoint can be considered similar to a UDP/IP port, except the data transfers are unidirectional.

Endpoints are represented by the Endpoint struct, and all defined endpoints can be obtained through the Endpoints field of the Interface.Setting.

Each endpoint descriptor (EndpointDesc) defined in the interface's endpoint map includes information about the type of the endpoint:

- endpoint address
- endpoint number
- direction: IN (device-to-host) or OUT (host-to-device)
- transfer type: the USB standard defines a few distinct data transfer types:
--- bulk - high throughput, but no guaranteed bandwidth and no latency guarantees,
--- isochronous - medium throughput, guaranteed bandwidth, some latency guarantees,
--- interrupt - low throughput, high latency guarantees.
The endpoint descriptor determines the type of the transfer that will be used.
- maximum packet size: maximum number of bytes that can be sent or received by the device in a single USB transaction.

and a few other less frequently used pieces of endpoint information.

An IN Endpoint can be opened for reading through Interface.InEndpoint(epNum), while an OUT Endpoint can be opened for writing through Interface.OutEndpoint(epNum). An InEndpoint implements the io.Reader interface, and an OutEndpoint implements the io.Writer interface. Both Reads and Writes will accept larger slices of data than the endpoint's maximum packet size; the transfer will be split into smaller USB transactions as needed. Using a Read/Write size equal to an integer multiple of the maximum packet size helps improve transfer performance.

Apart from the 15 possible data endpoints, each USB device also has a control endpoint. The control endpoint is present regardless of the current device config, claimed interfaces and their alternate settings. This makes sense, as the control endpoint is actually used, among other things, to issue commands to switch the active config or select an alternate setting for an interface. Control commands are also often used to control the behavior of the device. There is no single standard for control commands though, and many devices implement their own custom control command schema. Control commands can be issued through Device.Control().

For more information about the USB protocol and handling USB devices, see the excellent "USB in a nutshell" guide: http://www.beyondlogic.org/usbnutshell/

This example demonstrates the full API for accessing endpoints. It opens a device with a known VID/PID, switches the device to configuration #2, and in that configuration opens (claims) interface #3 with alternate setting #0. Within that interface setting it opens an IN endpoint number 6 and an OUT endpoint number 5, then starts copying data between them.

This example demonstrates the use of a few convenience functions that can be used in simple situations and with simple devices. It opens a device with a given VID/PID, claims the default interface (using the same config as currently active, interface 0, alternate setting 0) and tries to write 5 bytes of data to endpoint number 7.
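The following condensed sketch (not one of the package's documented examples) ties these pieces together: Context, config and interface selection, and endpoint I/O. The VID/PID, config, interface and endpoint numbers are placeholders for a real device.

	package main

	import (
		"log"

		"github.com/google/gousb"
	)

	func main() {
		// The Context manages all resources for communicating with USB devices.
		ctx := gousb.NewContext()
		defer ctx.Close()

		// The VID/PID values are placeholders for a real device.
		dev, err := ctx.OpenDeviceWithVIDPID(0x1234, 0x5678)
		if err != nil || dev == nil {
			log.Fatalf("could not open device: %v", err)
		}
		defer dev.Close()

		// Select config #2, then claim interface #3 with alternate setting #0.
		cfg, err := dev.Config(2)
		if err != nil {
			log.Fatal(err)
		}
		defer cfg.Close()
		intf, err := cfg.Interface(3, 0)
		if err != nil {
			log.Fatal(err)
		}
		defer intf.Close()

		// Open IN endpoint 6 and read one maximum-packet-sized buffer.
		ep, err := intf.InEndpoint(6)
		if err != nil {
			log.Fatal(err)
		}
		buf := make([]byte, ep.Desc.MaxPacketSize)
		n, err := ep.Read(buf)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("read %d bytes", n)
	}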
sunspec is an API that allows navigation around SunSpec device maps. The key abstractions are: Array, Device, Model, Block and Point, with Arrays being collections of Devices, Devices collections of Models, and so on. At the lowest level are Points, which provide strongly typed access to blocks of 16-bit registers. The API is intended to be backed by different physical implementations, including:
Package XGB provides the X Go Binding, which is a low-level API to communicate with the core X protocol and many of the X extensions. It is *very* closely modeled on XCB, so that experience with XCB (or xpyb) is easily translatable to XGB. That is, it uses the same cookie/reply model and is thread safe. There are otherwise no major differences (in the API). Most uses of XGB typically fall under the realm of window manager and GUI kit development, but other applications (like pagers, panels, tilers, etc.) may also require XGB. Moreover, it is a near certainty that if you need to work with X, xgbutil will be of great use to you as well: https://github.com/BurntSushi/xgbutil This is an extremely terse example that demonstrates how to connect to X, create a window, listen to StructureNotify events and Key{Press,Release} events, map the window, and print out all events received. An example with accompanying documentation can be found in examples/create-window. This is another small example that shows how to query Xinerama for geometry information of each active head. Accompanying documentation for this example can be found in examples/xinerama. XGB can benefit greatly from parallelism due to its concurrent design. For evidence of this claim, please see the benchmarks in xproto/xproto_test.go. xproto/xproto_test.go contains a number of contrived tests that stress particular corners of XGB that I presume could be problem areas. Namely: requests with no replies, requests with replies, checked errors, unchecked errors, sequence number wrapping, cookie buffer flushing (i.e., forcing a round trip every N requests made that don't have a reply), getting/setting properties and creating a window and listening to StructureNotify events. Both XCB and xpyb use the same Python module (xcbgen) for a code generator. XGB (before this fork) used the same code generator as well, but in my attempt to add support for more extensions, I found the code generator extremely difficult to work with. Therefore, I re-wrote the code generator in Go. It can be found in its own sub-package, xgbgen, of xgb. My design of xgbgen includes a rough consideration that it could be used for other languages. I am reasonably confident that the core X protocol is in full working form. I've also tested the Xinerama and RandR extensions sparingly. Many of the other existing extensions have Go source generated (and are compilable) and are included in this package, but I am currently unsure of their status. They *should* work. XKB is the only extension that intentionally does not work, although I suspect that GLX also does not work (however, there is Go source code for GLX that compiles, unlike XKB). I don't currently have any intention of getting XKB working, due to its complexity and my current mental incapacity to test it.
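Returning to the terse create-window example described earlier, a compressed sketch of that flow might look as follows; it mirrors examples/create-window, with error handling kept minimal:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/xgb"
		"github.com/BurntSushi/xgb/xproto"
	)

	func main() {
		X, err := xgb.NewConn()
		if err != nil {
			panic(err)
		}
		screen := xproto.Setup(X).DefaultScreen(X)

		// Create a 500x500 window listening to StructureNotify and key events.
		wid, _ := xproto.NewWindowId(X)
		xproto.CreateWindow(X, screen.RootDepth, wid, screen.Root,
			0, 0, 500, 500, 0,
			xproto.WindowClassInputOutput, screen.RootVisual,
			xproto.CwEventMask,
			[]uint32{xproto.EventMaskStructureNotify |
				xproto.EventMaskKeyPress | xproto.EventMaskKeyRelease})
		xproto.MapWindow(X, wid)

		// Print every event or error received.
		for {
			ev, xerr := X.WaitForEvent()
			if ev == nil && xerr == nil {
				return // connection closed
			}
			if ev != nil {
				fmt.Printf("Event: %s\n", ev)
			}
			if xerr != nil {
				fmt.Printf("Error: %s\n", xerr)
			}
		}
	}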
Package tailetc implements a total-memory-cache etcd v3 client, implemented by tailing (watching) an entire etcd database. The client maintains a complete copy of the etcd database in memory, with values decoded into Go objects.

It presents a simplified transaction model that assumes that multiple writers will not be contending on keys. When you call tx.Get or tx.Put, your Tx records the current etcd global revision. When you tx.Commit, if some newer revision of any key touched by the Tx is in etcd, then the commit will fail.

Contention failures are reported as ErrTxStale from tx.Commit. Failures may be reported sooner as tx.Get or tx.Put errors, but the tx error is sticky: if you ignore those errors, the eventual error from tx.Commit will have ErrTxStale in its error chain.

The Tx.Commit method waits before successfully returning until the DB has caught up with the etcd global revision of the transaction. This ensures that sequential transactions happen in the strict database sequence. So if you serve a REST API from a single *etcd.DB instance, then the API will behave as users expect it to.

If you know only one thing about this etcd client, know this:

Everything else should follow a programmer's intuition for an in-memory map guarded by a RWMutex.

We take advantage of etcd's watch model to maintain a *decoded* value cache of the entire database in memory. This means that querying a value is extremely cheap. Issuing tx.Get(key) involves no more work than holding a mutex read lock, reading from a map, and cloning the value.

The etcd watch is then exposed to the user of *etcd.DB via the WatchFunc. This lets users maintain in-memory higher-level indexes into the etcd database that respond to external commits. WatchFunc is called while the database lock is held, so a WatchFunc implementation cannot synchronously issue transactions.

The cache in etcd.DB is very careful to copy objects both into and out of the cache, so that users of the etcd.DB cannot get pointers directly into the memory inside the cache. This means over-copying. Users can control precisely how much copying is done, and how, by providing a CloneFunc implementation. The general ownership semantics are: objects are copied as soon as they are passed to etcd, and any memory returned by etcd is owned by the caller.
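A hypothetical sketch of the contention model described above. Only tx.Get, tx.Put, tx.Commit and ErrTxStale come from this documentation; the transaction constructor and the exact signatures shown here are illustrative assumptions, not the verified package API.

	// Hypothetical fragment; db.Tx and the Get/Put signatures are assumed.
	tx := db.Tx(ctx) // assumed way to begin a transaction pinned to the current revision
	v, err := tx.Get("some/key")
	if err != nil {
		// tx errors are sticky; Commit would surface ErrTxStale anyway.
	}
	_ = v
	if err := tx.Put("some/key", newValue); err != nil {
		// handle error
	}
	if err := tx.Commit(); errors.Is(err, tailetc.ErrTxStale) {
		// A concurrent writer touched one of our keys; retry the transaction.
	}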
Package types implements concrete types for the dcrwallet JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here; however, it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v3). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation.

To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire.

Unmarshalling a received Request object is a two step process:

This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID.

Unmarshalling a received Response object is also a two step process:

As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error.

This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function, which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method. However, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. That said, it is quite useful for user-supplied commands that are intentionally dynamic.

To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
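As a purely illustrative sketch of the preferred New<Foo>Cmd flow: the concrete command constructor and the exact dcrjson.MarshalCmd signature below are assumptions used to show the shape of the flow, not verified API.

	// Illustrative only: NewGetBalanceCmd and the MarshalCmd signature
	// are assumptions, chosen to show the general pattern.
	cmd := types.NewGetBalanceCmd(nil, nil) // hypothetical New<Foo>Cmd call
	b, err := dcrjson.MarshalCmd("1.0", 1, cmd)
	if err != nil {
		// handle error
	}
	// b holds the raw JSON-RPC bytes ready to be sent across the wire.
	_ = b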
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html.

An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlite.Pool) so goroutines can borrow a connection while they need to talk to the database.

This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html

The implementation automatically handles shared cache locking; see the documentation on Stmt.Step for details.

The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session.

This is not a database/sql driver.

Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write:

After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement.

The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid:

Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with:

Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session:

As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is:

SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqliteutil package has a Savepoint function that helps automate this.

Using a Pool to execute SQL in a concurrent HTTP handler.

For helper functions that make some kinds of statements easier to write see the sqliteutil package.
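A short sketch of the pool-and-prepare pattern described above. The Open/Get/Put/Prep names follow the package as described, but the exact signatures and the query are illustrative assumptions:

	// Minimal sketch of the pool-and-prepare pattern; not verified API.
	dbpool, err := sqlite.Open("file:mydb.sqlite", 0, 10)
	if err != nil {
		// handle error
	}
	conn := dbpool.Get(nil) // nil means no interrupt/done channel
	defer dbpool.Put(conn)

	// Prepared statements are cached per connection, keyed by query string,
	// so on a warmed-up connection this is just a map lookup.
	stmt := conn.Prep("SELECT foo FROM footable WHERE id = $id;")
	stmt.SetText("$id", "some-id") // parameter value is illustrative
	for {
		hasRow, err := stmt.Step()
		if err != nil || !hasRow {
			break
		}
		_ = stmt.GetText("foo")
	}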
Package libvirt provides a Go binding to the libvirt C library. Through conditional compilation it supports libvirt versions 1.2.0 onwards. This is done automatically, with no requirement to use magic Go build tags. If an API was not available in the particular version of libvirt this package was built against, an error will be returned with a code of ERR_NO_SUPPORT. This is the same code seen if using a new libvirt library to talk to an old libvirtd lacking the API, or if a hypervisor does not support a given feature, so an application can easily handle all scenarios together.

The Go binding is a fairly direct mapping of the underlying C API which seeks to maximise the use of the Go type system to allow strong compiler type checking. The following rules describe how APIs/constants are mapped from C to Go.

For structs, the 'vir' prefix and 'Ptr' suffix are removed from the name. e.g. virConnectPtr in C becomes 'Connect' in Go.

For structs which are reference counted at the C level, it is necessary to explicitly release the reference at the Go level. e.g. if a Go method returns a '*Domain' struct, it is necessary to call 'Free' on this when no longer required. The use of 'defer' is recommended for this purpose.

If multiple goroutines are using the same libvirt object struct, it may not be possible to determine which goroutine should call 'Free'. In such scenarios each new goroutine should call 'Ref' to obtain a private reference on the underlying C struct. All goroutines can then call 'Free' unconditionally, with the final one causing the release of the C object.

For methods, the 'vir' prefix and object name prefix are removed from the name. The C functions become methods with an object receiver. e.g. 'virDomainScreenshot' in C becomes 'Screenshot' with a 'Domain *' receiver.

For methods which accept an 'unsigned int flags' parameter at the C level, the corresponding Go parameter will be a named type corresponding to the C enum that defines the valid flags. For example, the ListAllDomains method takes a 'flags ConnectListAllDomainsFlags' parameter. If there are not currently any flags defined for a method in the C API, then the Go method parameter will be declared as a "flags uint32". Callers should always pass the literal integer value 0 for such parameters, without forcing any specific type. This will allow compatibility with future updates to the libvirt-go binding, which may replace the 'uint32' type with an enum type at a later date.

For enums, the VIR_ prefix is removed from the name. The enums get a dedicated type defined in Go. e.g. the VIR_NODE_SUSPEND_TARGET_MEM enum constant in C becomes NODE_SUSPEND_TARGET_MEM with a type of NodeSuspendTarget.

Methods accepting or returning virTypedParameter arrays in C will map the parameters into a Go struct. The struct will contain two fields for each possible parameter. One boolean field with a suffix of 'Set' indicates whether the parameter has a value set, and the other custom typed field provides the parameter value. This makes it possible to distinguish a parameter with a default value of '0' from a parameter which is 0 because it isn't supported by the hypervisor. If the C API defines additional typed parameters, then the corresponding Go struct will be extended to have further fields. e.g. the GetMemoryStats method in Go (which is backed by virNodeGetMemoryStats in C) will return a NodeMemoryStats struct containing the typed parameter values.

Every method that can fail will include an 'error' object as the last return value. This will be an instance of the Error struct if an error occurred. To check for specific libvirt error codes, it is necessary to cast the error.

To connect to libvirt
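For instance, a minimal connect-and-list sketch following the mapping rules above (the connection URI is a placeholder, and error handling is kept brief):

	package main

	import (
		"fmt"
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		// Connect to the local system hypervisor; the URI is a placeholder.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// No flags are needed here, so pass the literal 0 per the flags rule above.
		doms, err := conn.ListAllDomains(0)
		if err != nil {
			log.Fatal(err)
		}
		for _, dom := range doms {
			if name, err := dom.GetName(); err == nil {
				fmt.Println(name)
			}
			// Reference-counted C object: release it when no longer required.
			dom.Free()
		}
	}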
Package libvirt provides a Go binding to the libvirt C library Through conditional compilation it supports libvirt versions 1.2.0 onwards. This is done automatically, with no requirement to use magic Go build tags. If an API was not available in the particular version of libvirt this package was built against, an error will be returned with a code of ERR_NO_SUPPORT. This is the same code seen if using a new libvirt library to talk to an old libvirtd lacking the API, or if a hypervisor does not support a given feature, so an application can easily handle all scenarios together. The Go binding is a fairly direct mapping of the underling C API which seeks to maximise the use of the Go type system to allow strong compiler type checking. The following rules describe how APIs/constants are mapped from C to Go For structs, the 'vir' prefix and 'Ptr' suffix are removed from the name. e.g. virConnectPtr in C becomes 'Connect' in Go. For structs which are reference counted at the C level, it is neccessary to explicitly release the reference at the Go level. e.g. if a Go method returns a '* Domain' struct, it is neccessary to call 'Free' on this when no longer required. The use of 'defer' is recommended for this purpose If multiple goroutines are using the same libvirt object struct, it may not be possible to determine which goroutine should call 'Free'. In such scenarios each new goroutine should call 'Ref' to obtain a private reference on the underlying C struct. All goroutines can call 'Free' unconditionally with the final one causing the release of the C object. For methods, the 'vir' prefix and object name prefix are remove from the name. The C functions become methods with an object receiver. e.g. 'virDomainScreenshot' in C becomes 'Screenshot' with a 'Domain *' receiver. For methods which accept a 'unsigned int flags' parameter in the C level, the corresponding Go parameter will be a named type corresponding to the C enum that defines the valid flags. For example, the ListAllDomains method takes a 'flags ConnectListAllDomainsFlags' parameter. If there are not currently any flags defined for a method in the C API, then the Go method parameter will be declared as a "flags uint32". Callers should always pass the literal integer value 0 for such parameters, without forcing any specific type. This will allow compatibility with future updates to the libvirt-go binding which may replace the 'uint32' type with a enum type at a later date. For enums, the VIR_ prefix is removed from the name. The enums get a dedicated type defined in Go. e.g. the VIR_NODE_SUSPEND_TARGET_MEM enum constant in C, becomes NODE_SUSPEND_TARGET_MEM with a type of NodeSuspendTarget. Methods accepting or returning virTypedParameter arrays in C will map the parameters into a Go struct. The struct will contain two fields for each possible parameter. One boolean field with a suffix of 'Set' indicates whether the parameter has a value set, and the other custom typed field provides the parameter value. This makes it possible to distinguish a parameter with a default value of '0' from a parameter which is 0 because it isn't supported by the hypervisor. If the C API defines additional typed parameters, then the corresponding Go struct will be extended to have further fields. e.g. the GetMemoryStats method in Go (which is backed by virNodeGetMemoryStats in C) will return a NodeMemoryStats struct containing the typed parameter values. Every method that can fail will include an 'error' object as the last return value. 
This will be an instance of the Error struct if an error occurred. To check for specific libvirt error codes, it is neccessary to cast the error. To connect to libvirt
Package libvirt provides a Go binding to the libvirt C library Through conditional compilation it supports libvirt versions 1.2.0 onwards. This is done automatically, with no requirement to use magic Go build tags. If an API was not available in the particular version of libvirt this package was built against, an error will be returned with a code of ERR_NO_SUPPORT. This is the same code seen if using a new libvirt library to talk to an old libvirtd lacking the API, or if a hypervisor does not support a given feature, so an application can easily handle all scenarios together. The Go binding is a fairly direct mapping of the underling C API which seeks to maximise the use of the Go type system to allow strong compiler type checking. The following rules describe how APIs/constants are mapped from C to Go For structs, the 'vir' prefix and 'Ptr' suffix are removed from the name. e.g. virConnectPtr in C becomes 'Connect' in Go. For structs which are reference counted at the C level, it is neccessary to explicitly release the reference at the Go level. e.g. if a Go method returns a '* Domain' struct, it is neccessary to call 'Free' on this when no longer required. The use of 'defer' is recommended for this purpose If multiple goroutines are using the same libvirt object struct, it may not be possible to determine which goroutine should call 'Free'. In such scenarios each new goroutine should call 'Ref' to obtain a private reference on the underlying C struct. All goroutines can call 'Free' unconditionally with the final one causing the release of the C object. For methods, the 'vir' prefix and object name prefix are remove from the name. The C functions become methods with an object receiver. e.g. 'virDomainScreenshot' in C becomes 'Screenshot' with a 'Domain *' receiver. For methods which accept a 'unsigned int flags' parameter in the C level, the corresponding Go parameter will be a named type corresponding to the C enum that defines the valid flags. For example, the ListAllDomains method takes a 'flags ConnectListAllDomainsFlags' parameter. If there are not currently any flags defined for a method in the C API, then the Go method parameter will be declared as a "flags uint32". Callers should always pass the literal integer value 0 for such parameters, without forcing any specific type. This will allow compatibility with future updates to the libvirt-go binding which may replace the 'uint32' type with a enum type at a later date. For enums, the VIR_ prefix is removed from the name. The enums get a dedicated type defined in Go. e.g. the VIR_NODE_SUSPEND_TARGET_MEM enum constant in C, becomes NODE_SUSPEND_TARGET_MEM with a type of NodeSuspendTarget. Methods accepting or returning virTypedParameter arrays in C will map the parameters into a Go struct. The struct will contain two fields for each possible parameter. One boolean field with a suffix of 'Set' indicates whether the parameter has a value set, and the other custom typed field provides the parameter value. This makes it possible to distinguish a parameter with a default value of '0' from a parameter which is 0 because it isn't supported by the hypervisor. If the C API defines additional typed parameters, then the corresponding Go struct will be extended to have further fields. e.g. the GetMemoryStats method in Go (which is backed by virNodeGetMemoryStats in C) will return a NodeMemoryStats struct containing the typed parameter values. Every method that can fail will include an 'error' object as the last return value. 
This will be an instance of the Error struct if an error occurred. To check for specific libvirt error codes, it is neccessary to cast the error. To connect to libvirt
Package libvirt provides a Go binding to the libvirt C library Through conditional compilation it supports libvirt versions 1.2.0 onwards. This is done automatically, with no requirement to use magic Go build tags. If an API was not available in the particular version of libvirt this package was built against, an error will be returned with a code of ERR_NO_SUPPORT. This is the same code seen if using a new libvirt library to talk to an old libvirtd lacking the API, or if a hypervisor does not support a given feature, so an application can easily handle all scenarios together. The Go binding is a fairly direct mapping of the underling C API which seeks to maximise the use of the Go type system to allow strong compiler type checking. The following rules describe how APIs/constants are mapped from C to Go For structs, the 'vir' prefix and 'Ptr' suffix are removed from the name. e.g. virConnectPtr in C becomes 'Connect' in Go. For structs which are reference counted at the C level, it is neccessary to explicitly release the reference at the Go level. e.g. if a Go method returns a '* Domain' struct, it is neccessary to call 'Free' on this when no longer required. The use of 'defer' is recommended for this purpose If multiple goroutines are using the same libvirt object struct, it may not be possible to determine which goroutine should call 'Free'. In such scenarios each new goroutine should call 'Ref' to obtain a private reference on the underlying C struct. All goroutines can call 'Free' unconditionally with the final one causing the release of the C object. For methods, the 'vir' prefix and object name prefix are remove from the name. The C functions become methods with an object receiver. e.g. 'virDomainScreenshot' in C becomes 'Screenshot' with a 'Domain *' receiver. For methods which accept a 'unsigned int flags' parameter in the C level, the corresponding Go parameter will be a named type corresponding to the C enum that defines the valid flags. For example, the ListAllDomains method takes a 'flags ConnectListAllDomainsFlags' parameter. If there are not currently any flags defined for a method in the C API, then the Go method parameter will be declared as a "flags uint32". Callers should always pass the literal integer value 0 for such parameters, without forcing any specific type. This will allow compatibility with future updates to the libvirt-go binding which may replace the 'uint32' type with a enum type at a later date. For enums, the VIR_ prefix is removed from the name. The enums get a dedicated type defined in Go. e.g. the VIR_NODE_SUSPEND_TARGET_MEM enum constant in C, becomes NODE_SUSPEND_TARGET_MEM with a type of NodeSuspendTarget. Methods accepting or returning virTypedParameter arrays in C will map the parameters into a Go struct. The struct will contain two fields for each possible parameter. One boolean field with a suffix of 'Set' indicates whether the parameter has a value set, and the other custom typed field provides the parameter value. This makes it possible to distinguish a parameter with a default value of '0' from a parameter which is 0 because it isn't supported by the hypervisor. If the C API defines additional typed parameters, then the corresponding Go struct will be extended to have further fields. e.g. the GetMemoryStats method in Go (which is backed by virNodeGetMemoryStats in C) will return a NodeMemoryStats struct containing the typed parameter values. Every method that can fail will include an 'error' object as the last return value. 
This will be an instance of the Error struct if an error occurred. To check for specific libvirt error codes, it is neccessary to cast the error. To connect to libvirt
Package libvirt provides a Go binding to the libvirt C library Through conditional compilation it supports libvirt versions 1.2.0 onwards. This is done automatically, with no requirement to use magic Go build tags. If an API was not available in the particular version of libvirt this package was built against, an error will be returned with a code of ERR_NO_SUPPORT. This is the same code seen if using a new libvirt library to talk to an old libvirtd lacking the API, or if a hypervisor does not support a given feature, so an application can easily handle all scenarios together. The Go binding is a fairly direct mapping of the underling C API which seeks to maximise the use of the Go type system to allow strong compiler type checking. The following rules describe how APIs/constants are mapped from C to Go For structs, the 'vir' prefix and 'Ptr' suffix are removed from the name. e.g. virConnectPtr in C becomes 'Connect' in Go. For structs which are reference counted at the C level, it is neccessary to explicitly release the reference at the Go level. e.g. if a Go method returns a '* Domain' struct, it is neccessary to call 'Free' on this when no longer required. The use of 'defer' is recommended for this purpose If multiple goroutines are using the same libvirt object struct, it may not be possible to determine which goroutine should call 'Free'. In such scenarios each new goroutine should call 'Ref' to obtain a private reference on the underlying C struct. All goroutines can call 'Free' unconditionally with the final one causing the release of the C object. For methods, the 'vir' prefix and object name prefix are remove from the name. The C functions become methods with an object receiver. e.g. 'virDomainScreenshot' in C becomes 'Screenshot' with a 'Domain *' receiver. For methods which accept a 'unsigned int flags' parameter in the C level, the corresponding Go parameter will be a named type corresponding to the C enum that defines the valid flags. For example, the ListAllDomains method takes a 'flags ConnectListAllDomainsFlags' parameter. If there are not currently any flags defined for a method in the C API, then the Go method parameter will be declared as a "flags uint32". Callers should always pass the literal integer value 0 for such parameters, without forcing any specific type. This will allow compatibility with future updates to the libvirt-go binding which may replace the 'uint32' type with a enum type at a later date. For enums, the VIR_ prefix is removed from the name. The enums get a dedicated type defined in Go. e.g. the VIR_NODE_SUSPEND_TARGET_MEM enum constant in C, becomes NODE_SUSPEND_TARGET_MEM with a type of NodeSuspendTarget. Methods accepting or returning virTypedParameter arrays in C will map the parameters into a Go struct. The struct will contain two fields for each possible parameter. One boolean field with a suffix of 'Set' indicates whether the parameter has a value set, and the other custom typed field provides the parameter value. This makes it possible to distinguish a parameter with a default value of '0' from a parameter which is 0 because it isn't supported by the hypervisor. If the C API defines additional typed parameters, then the corresponding Go struct will be extended to have further fields. e.g. the GetMemoryStats method in Go (which is backed by virNodeGetMemoryStats in C) will return a NodeMemoryStats struct containing the typed parameter values. Every method that can fail will include an 'error' object as the last return value. 
This will be an instance of the Error struct if an error occurred. To check for specific libvirt error codes, it is necessary to cast the error. To connect to libvirt:
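As a hedged illustration of these conventions, assuming the commonly used import path github.com/libvirt/libvirt-go (the connection URI is a placeholder), connecting, checking a specific error code, and releasing references might look like this:

```go
package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// virConnectListAllDomains in C becomes ListAllDomains on *Connect,
	// taking a typed ConnectListAllDomainsFlags value.
	doms, err := conn.ListAllDomains(libvirt.CONNECT_LIST_DOMAINS_ACTIVE)
	if err != nil {
		// Cast to the Error struct to check for a specific error code.
		if lverr, ok := err.(libvirt.Error); ok && lverr.Code == libvirt.ERR_NO_SUPPORT {
			log.Fatal("API not supported by this libvirt version")
		}
		log.Fatal(err)
	}
	for _, dom := range doms {
		name, _ := dom.GetName()
		fmt.Println(name)
		dom.Free() // release the reference-counted C object
	}
}
```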
Package ivsrealtime provides the API client, operations, and parameter types for Amazon Interactive Video Service RealTime. The Amazon Interactive Video Service (IVS) real-time API is REST compatible, using a standard HTTP API and an AWS EventBridge event stream for responses. JSON is used for both requests and responses, including errors. Key concepts: Stage — A virtual space where participants can exchange video in real time. Participant token — A token that authenticates a participant when they join a stage. Participant object — Represents participants (people) in the stage and contains information about them. When a token is created, it includes a participant ID; when a participant uses that token to join a stage, the participant is associated with that participant ID. There is a 1:1 mapping between participant tokens and participants. For server-side composition: Composition process — Composites participants of a stage into a single video and forwards it to a set of outputs (e.g., IVS channels). Composition operations support this process. Composition — Controls the look of the outputs, including how participants are positioned in the video. For more information about your IVS live stream, also see Getting Started with Amazon IVS Real-Time Streaming. A tag is a metadata label that you assign to an AWS resource. A tag comprises a key and a value, both set by you. For example, you might set a tag as topic:nature to label a particular video category. See Best practices and strategies in Tagging AWS Resources and Tag Editor for details, including restrictions that apply to tags and "Tag naming limits and requirements"; Amazon IVS stages have no service-specific constraints beyond what is documented there. Tags can help you identify and organize your AWS resources. For example, you can use the same tag for different resources to indicate that they are related. You can also use tags to manage access (see Access Tags). The Amazon IVS real-time API has these tag-related operations: TagResource, UntagResource, and ListTagsForResource. The following resource supports tagging: Stage. At most 50 tags can be applied to a resource.
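As a rough sketch of creating a taggable Stage with the aws-sdk-go-v2 client (the input field names are recalled from the SDK and should be verified against the current module; the stage name and tag are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ivsrealtime"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := ivsrealtime.NewFromConfig(cfg)

	// Create a stage with a tag; Stage is the only taggable resource.
	out, err := client.CreateStage(context.TODO(), &ivsrealtime.CreateStageInput{
		Name: aws.String("demo-stage"),
		Tags: map[string]string{"topic": "nature"},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("created stage:", aws.ToString(out.Stage.Arn))
}
```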
Package sessions provides cookie and filesystem sessions and infrastructure for custom session backends. The key features are: Let's start with an example that shows the sessions API in a nutshell: First we initialize a session store by calling NewCookieStore() and passing a secret key used to authenticate the session. Inside the handler, we call store.Get() to retrieve an existing session or a new one. Then we set some session values in session.Values, which is a map[interface{}]interface{}. And finally we call session.Save() to save the session in the response. Note that in production code, we should check for errors when calling session.Save(r, w), and either display an error message or otherwise handle it. Save must be called before writing to the response, otherwise the session cookie will not be sent to the client. That's all you need to know for the basic usage. Let's take a look at other options, starting with flash messages. Flash messages are session values that last until read. The term appeared with Ruby on Rails a few years back. When we request a flash message, it is removed from the session. To add a flash, call session.AddFlash(), and to get all flashes, call session.Flashes(). Here is an example: Flash messages are useful to set information to be read after a redirection, like after form submissions. There may also be cases where you want to store a complex datatype within a session, such as a struct. Sessions are serialised using the encoding/gob package, so it is easy to register new datatypes for storage in sessions: As it's not possible to pass a raw type as a parameter to a function, gob.Register() relies on us passing it a value of the desired type. In the example above we've passed it a pointer to a struct and a pointer to a custom type representing a map[string]interface{}. (We could have passed non-pointer values if we wished.) This will then allow us to serialise/deserialise values of those types to and from our sessions. Note that because session values are stored in a map[interface{}]interface{}, there's a need to type-assert data when retrieving it. We'll use the Person struct we registered above: By default, session cookies last for a month. This is probably too long for some cases, but it is easy to change this and other attributes during runtime. Sessions can be configured individually or the store can be configured and then all sessions saved using it will use that configuration. We access session.Options or store.Options to set a new configuration. The fields are basically a subset of http.Cookie fields. Let's change the maximum age of a session to one week: Sometimes we may want to change authentication and/or encryption keys without breaking existing sessions. The CookieStore supports key rotation, and to use it you just need to set multiple authentication and encryption keys, in pairs, to be tested in order: New sessions will be saved using the first pair. Old sessions can still be read because the first pair will fail, and the second will be tested. This makes it easy to "rotate" secret keys and still be able to validate existing sessions. Note: for all pairs the encryption key is optional; set it to nil or omit it and encryption won't be used. Multiple sessions can be used in the same request, even with different session backends. When this happens, calling Save() on each session individually would be cumbersome, so we have a way to save all sessions at once: it's sessions.Save().
Here's an example: This is possible because when we call Get() from a session store, it adds the session to a common registry. Save() uses it to save all registered sessions.
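Here is a minimal sketch tying the pieces above together: a CookieStore, per-session values and flashes, and a single sessions.Save(r, w) call to flush every registered session (the key and session names are placeholders):

```go
package main

import (
	"net/http"

	"github.com/gorilla/sessions"
)

// Note: in real code, use a high-entropy key kept out of source control.
var store = sessions.NewCookieStore([]byte("something-very-secret"))

func myHandler(w http.ResponseWriter, r *http.Request) {
	// Get() returns an existing session or a new one, and registers it
	// in the common registry so sessions.Save can find it later.
	sessionA, _ := store.Get(r, "session-a")
	sessionB, _ := store.Get(r, "session-b")

	sessionA.Values["foo"] = "bar"
	sessionB.AddFlash("That's it!")

	// Save every registered session in one call, before writing the body.
	if err := sessions.Save(r, w); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write([]byte("saved"))
}

func main() {
	http.HandleFunc("/", myHandler)
	http.ListenAndServe(":8080", nil)
}
```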
Package redisc implements a redis cluster client on top of the redigo client package. It supports all commands that can be executed on a redis cluster, including pub-sub, scripts and read-only connections to read data from replicas. See http://redis.io/topics/cluster-spec for details. The package defines two main types: Cluster and Conn. Both are described in more detail below, but the Cluster manages the mapping of keys (or more exactly, hash slots computed from keys) to a group of nodes that form a redis cluster, and a Conn manages a connection to this cluster. The package is designed such that for simple uses, or when keys have been carefully named to play well with a redis cluster, a Cluster value can be used as a drop-in replacement for a redis.Pool from the redigo package. Similarly, the Conn type implements redigo's redis.Conn interface (and the augmented redis.ConnWithTimeout one too), so the API to execute commands is the same - in fact the redisc package uses the redigo package as its only third-party dependency. When more control is needed, the package offers some extra behaviour specific to working with a redis cluster: Slot and SplitBySlot functions to compute the slot for a given key and to split a list of keys into groups of keys from the same slot, so that each group can safely be handled using the same connection. *Conn.Bind (or the BindConn package-level helper function) to explicitly specify the keys that will be used with the connection so that the right node is selected, instead of relying on the automatic detection based on the first parameter of the command. *Conn.ReadOnly (or the ReadOnlyConn package-level helper function) to mark a connection as read-only, allowing commands to be served by a replica instead of the master. RetryConn to wrap a connection into one that automatically follows redirections when the cluster moves slots around. Helper functions to deal with cluster-specific errors. The Cluster type manages a redis cluster and offers an interface compatible with redigo's redis.Pool: Along with some additional methods specific to a cluster: If the CreatePool function field is set, then a redis.Pool is created to manage connections to each of the cluster's nodes. A call to Get returns a connection from that pool. The Dial method, on the other hand, guarantees that the returned connection will not be managed by a pool, even if CreatePool is set. It calls redigo's redis.Dial function to create the unpooled connection, passing along any DialOptions set on the cluster. If the cluster's CreatePool field is nil, Get behaves the same as Dial. The Refresh method refreshes the cluster's internal mapping of hash slots to nodes. It should typically be called only once, after the cluster is created and before it is used, so that the first connections already benefit from smart routing. It is automatically kept up-to-date based on the redis MOVED responses afterwards. A cluster must be closed once it is no longer used to release its resources. The connection returned from Get or Dial is a redigo redis.Conn interface (that also implements redis.ConnWithTimeout), with a concrete type of *Conn. In addition to the interface's required methods, *Conn adds the following methods: The returned connection is not yet connected to any node; it is "bound" to a specific node only when a call to Do, Send, Receive or Bind is made.
For Do, Send and Receive, the node selection is implicit: it uses the first parameter of the command, and computes the hash slot assuming that first parameter is a key. It then binds the connection to the node corresponding to that slot. If there are no parameters for the command, or if there is no command (e.g. in a call to Receive), a random node is selected. Bind is explicit: it gives control to the caller over which node to select by specifying a list of keys that the caller wishes to handle with the connection. All keys must belong to the same slot, and the connection must not already be bound to a node, otherwise an error is returned. On success, the connection is bound to the node holding the slot of the specified key(s). Because the connection is returned as a redis.Conn interface, a type assertion must be used to access the underlying *Conn and to be able to call Bind: The BindConn package-level function is provided as a helper for this common use-case. The ReadOnly method marks the connection as read-only, meaning that it will attempt to connect to a replica instead of the master node for its slot. Once bound to a node, the READONLY redis command is sent automatically, so it doesn't have to be sent explicitly before use. ReadOnly must be called before the connection is bound to a node, otherwise an error is returned. For the same reason as for Bind, a type assertion must be used to call ReadOnly on a *Conn, so a package-level helper function is also provided, ReadOnlyConn. There is no ReadWrite method, because it can be sent as a normal redis command and will essentially end that connection (all commands will now return MOVED errors). If the connection was wrapped in a RetryConn call, then it will automatically follow the redirection to the master node (see the Redirections section). The connection must be closed after use, to release the underlying resources. The redis cluster may return MOVED and ASK errors when the node that received the command doesn't currently hold the slot corresponding to the key. The package cannot reliably handle those redirections automatically because the redirection error may be returned for a pipeline of commands, some of which may have succeeded. However, a connection can be wrapped by a call to RetryConn, which returns a redis.Conn interface where only calls to Do, Close and Err can succeed. That means pipelining is not supported, and only a single command can be executed at a time, but it will automatically handle MOVED and ASK replies, as well as TRYAGAIN errors. Note that even if RetryConn is not used, the cluster always updates its mapping of slots to nodes automatically by keeping track of MOVED replies. The concurrency model is similar to that of the redigo package: Cluster methods are safe to call concurrently (like redis.Pool). Connections do not support concurrent calls to write methods (Send, Flush) or concurrent calls to the read method (Receive). Connections do allow a concurrent reader and writer. Because the Do method combines the functionality of Send, Flush and Receive, it cannot be called concurrently with other methods. The Bind and ReadOnly methods are safe to call concurrently, but there is not much point in doing so, as both will fail if the connection is already bound. Create and use a cluster:
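A minimal sketch, with placeholder addresses and pool settings:

```go
package main

import (
	"log"
	"time"

	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	return &redis.Pool{
		MaxIdle:     5,
		IdleTimeout: time.Minute,
		Dial: func() (redis.Conn, error) {
			return redis.Dial("tcp", addr, opts...)
		},
	}, nil
}

func main() {
	cluster := redisc.Cluster{
		StartupNodes: []string{":7000", ":7001", ":7002"},
		DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
		CreatePool:   createPool,
	}
	defer cluster.Close()

	// Initialize the slot-to-node mapping before first use.
	if err := cluster.Refresh(); err != nil {
		log.Fatal(err)
	}

	conn := cluster.Get()
	defer conn.Close()

	// Wrap the connection so MOVED/ASK redirections are followed.
	rc, err := redisc.RetryConn(conn, 3, 100*time.Millisecond)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rc.Do("SET", "some-key", "value"); err != nil {
		log.Fatal(err)
	}
}
```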
Package harmony provides an interface to the Discord API (https://discord.com/developers/docs/intro). The first thing you do with Harmony is to create a Client. NewClient does just that by returning a new Client pre-configured with sane defaults which should work fine in most cases. However, should you need a more specific configuration, you can always tweak it with optional `ClientOption`s. See the documentation of NewClient and the ClientOption type for more information on how to do so. Once you have a Client, you can start interacting with the Discord API, but some methods (such as event handlers) won't be available until you connect to Discord's Gateway. You can do so by simply calling the Connect method of the Client: It is only when successfully connected to the Gateway that your bot will appear as online and your Client will be able to receive events and send messages. Harmony's HTTP API is organized by resource. A resource maps to a core concept in the Discord world, such as a User or a Channel. Here is the list of resources you can interact with: Every interaction you can have with a resource can be accessed via methods attached to it. For example, if you wish to send a message to a channel, first access the desired channel resource, then send the message: Endpoints that do not fall into one of those resources (creating a Guild for example, or getting valid Voice Regions) are directly available on the Client. To receive messages, use the OnMessageCreate method and give it your handler. It will be called each time a message is sent to a channel your bot is in, with the message as a parameter. To register handlers for other types of events, see the Client.On* methods. Note that your handlers are called in their own goroutine, meaning whatever you do inside of them won't block future events. When connecting to Discord, a session state is created with initial data sent by Discord's Gateway. As events are received by the client, this state is constantly updated so it always has the newest data available. This session state acts as a cache to avoid making requests over the HTTP API each time. If you need to get information about the current user, you can simply query the current state like so: Because this state might become memory hungry for bots that are in a very large number of servers, you can fine-tune the events you want to track with the WithGatewayIntents option. State can also be completely disabled using the WithStateTracking option while creating the harmony client.
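A hedged sketch of that flow; the exact NewClient signature, the handler parameter type, and the import path are assumptions to verify against the package docs:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/skwair/harmony"
)

func main() {
	// NewClient with default options; the token value is a placeholder.
	c, err := harmony.NewClient("your-bot-token")
	if err != nil {
		log.Fatal(err)
	}

	// Handlers run in their own goroutine, so they won't block events.
	c.OnMessageCreate(func(m *harmony.Message) {
		fmt.Println(m.Content)
	})

	// Only after Connect succeeds does the bot appear online and
	// start receiving events.
	if err = c.Connect(context.Background()); err != nil {
		log.Fatal(err)
	}

	select {} // block forever; a real bot would handle shutdown signals
}
```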
Package config is an encoding-agnostic configuration abstraction. It supports merging multiple configuration files, expanding environment variables, and a variety of other small niceties. It currently supports YAML, but may be extended in the future to support more restrictive encodings like JSON or TOML. It's often convenient to separate configuration into multiple files; for example, an application may want to first load some universally-applicable configuration and then merge in some environment-specific overrides. This package supports this pattern in a variety of ways, all of which use the same merge logic. Simple types (numbers, strings, dates, and anything else YAML would consider a scalar) are merged by replacing lower-priority values with higher-priority overrides. For example, consider this merge of base.yaml and override.yaml: Slices, arrays, and anything else YAML would consider a sequence are also replaced. Again merging base.yaml and override.yaml: Maps are recursively deep-merged, handling scalars and sequences as described above. Consider a merge between a more complex set of YAML files: In all cases, explicit nils (represented in YAML with a tilde) override any pre-existing configuration. For example, By default, the NewYAML constructor enables gopkg.in/yaml.v2's strict unmarshalling mode. This prevents a variety of common programmer errors, especially when deep-merging loosely-typed YAML files. In strict mode, providers throw errors if keys are duplicated in the same configuration source, all keys aren't used when populating a struct, or a merge encounters incompatible data types. This behavior can be disabled with the Permissive option. To maintain backward compatibility, all other constructors default to permissive unmarshalling. YAML allows strings to appear quoted or unquoted, so these two lines are identical: However, the YAML specification special-cases some unquoted strings. Most obviously, true and false are interpreted as Booleans (unless quoted). Less obviously, yes, no, on, off, and many variants of these words are also treated as Booleans (see http://yaml.org/type/bool.html for the complete specification). Correctly deep-merging sources requires this package to unmarshal and then remarshal all YAML, which implicitly converts these special-cased unquoted strings to their canonical representation. For example, Quoting special-cased strings prevents this surprising behavior. Unfortunately, this package was released with a variety of bugs and an overly large API. The internals of the configuration provider have been completely reworked and all known bugs have been addressed, but many duplicative exported functions were retained to preserve backward compatibility. New users should rely on the NewYAML constructor. In particular, avoid NewValue - it's unnecessary, complex, and may panic. Deprecated functions are documented in the format expected by the staticcheck linter, available at https://staticcheck.io/.
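A small sketch of the merge behaviour using the NewYAML constructor and the Source option (the key names and values are placeholders):

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"go.uber.org/config"
)

func main() {
	base := strings.NewReader("app:\n  name: svc\n  port: 8080\n")
	override := strings.NewReader("app:\n  port: 9090\n")

	// Later sources have higher priority; scalars are replaced,
	// maps are deep-merged.
	provider, err := config.NewYAML(config.Source(base), config.Source(override))
	if err != nil {
		log.Fatal(err)
	}

	var app struct {
		Name string `yaml:"name"`
		Port int    `yaml:"port"`
	}
	if err := provider.Get("app").Populate(&app); err != nil {
		log.Fatal(err)
	}
	fmt.Println(app.Name, app.Port) // svc 9090
}
```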
Package xirho implements an iterated function system fractal art renderer. An iterated function system is a collection of functions from points to points. Starting with a randomly selected point, we choose a function at random, apply that function to the point, and plot its new location, then repeat ad infinitum. With some additional steps, the resulting images can be stunning. The mathematical terminology used in xirho's documentation and API is as follows. A point is an element of R³ × [0, 1], i.e. a 3D point plus a color coordinate. A function, sometimes function type, is a procedure which maps points to points, possibly using additional fixed parameters to control the exact mapping. (Other IFS implementations typically refer to functions in this sense as variations.) A node is a particular instance of a function and its fixed parameters. An iterated function system, or just system, is a non-empty list of nodes, a Markov chain giving the probability of the algorithm transitioning from each node in the list to each other node in the list, an additional node applied to each output point to serve as a possibly nonlinear camera, and a mapping of color coordinates to colors. The Markov chain of a system may also be called the weights graph, or just the graph. Xirho does not include a designer to produce systems to render. Existing parameters can be loaded through the encoding and encoding/flame subpackages, or programmed by hand. To use xirho to render a system, create a Render containing the System and a Hist to plot points, then call its Render method with a non-trivial context. (The context closing is the only way that Render returns.) Alternatively, the RenderAsync method provides an API to manage rendering concurrently, e.g. to support a UI. For fine-grained control of the rendering process, the System.Iter method can be used directly.
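As a purely hypothetical sketch of that flow (the Render field names and overall wiring below are assumptions drawn from this description, not verified xirho API):

```go
// Hypothetical sketch; verify field and method names against the package docs.
r := &xirho.Render{
	System: system, // a System loaded via encoding/flame or built by hand
	Hist:   hist,   // the Hist that accumulates plotted points
}
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
r.Render(ctx) // Render returns only once ctx closes
```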
Package freeze enables the "freezing" of data, similar to JavaScript's Object.freeze(). A frozen object cannot be modified; attempting to do so will result in an unrecoverable panic. Freezing is useful for providing soft guarantees of immutability. That is: the compiler can't prevent you from mutating a frozen object, but the runtime can. One of the unfortunate aspects of Go is its limited support for constants: structs, slices, and even arrays cannot be declared as consts. This becomes a problem when you want to pass a slice around to many consumers without worrying about them modifying it. With freeze, you can guard against these unwanted or unintended behaviors. To accomplish this, the mprotect syscall is used. Sadly, this necessitates allocating new memory via mmap and copying the data into it. This performance penalty should not be prohibitive, but it's something to be aware of. In case it wasn't clear from the previous paragraph, this package is not intended to be used in production. A well-designed API is a much saner solution than freezing your data structures. I would even caution against using freeze in your automated testing, due to its platform-specific nature. freeze is best used for "one-off" debugging. Something like this:

1. Observe bug
2. Suspect that shared mutable data is the culprit
3. Call freeze.Object on the data after it is created
4. Run program again; it crashes
5. Inspect stack trace to identify where the data was modified
6. Fix bug
7. Remove call to freeze.Object

Again: do not use freeze in production. It's a cool proof-of-concept, and it can be useful for debugging, but that's about it. Let me put it another way: freeze imports four packages: reflect, runtime, unsafe, and syscall (actually golang.org/x/sys/unix). Does that sound like a package you want to depend on? Okay, back to the real documentation: Functions are provided for freezing the three "pointer types": Pointer, Slice, and Map. Each function returns a copy of their input that is backed by protected memory. In addition, Object is provided for freezing recursively. Given a slice of pointers, Object will prevent modifications to both the pointer data and the slice data, while Slice merely does the latter. To freeze an object: Note that since foo does not contain any pointers, calling Pointer(f) would have the same effect here. It is recommended that, where convenient, you reassign the return value to its original variable, as with append. Otherwise, you will retain both the mutable original and the frozen copy. Likewise, to freeze a slice: Interfaces can also be frozen, since internally they are just pointers to objects. The effect of this is that the interface's pure methods can still be called, but impure methods cannot. Unfortunately the impurity of a given method is defined by the implementation, not the interface. Even a String method could conceivably modify some internal state. Furthermore, the caveat about unexported struct fields (see below) applies here, so many exported objects cannot be completely frozen. This package depends heavily on the internal representations of the slice and map types. These objects are not likely to change, but if they do, this package will break. In general, you can't call Object on the same object twice. This is because Object will attempt to rewrite the object's internal pointers -- which is a memory modification. Calling Pointer or Slice twice should be fine. Object cannot descend into unexported struct fields.
It can still freeze the field itself, but if the field contains a pointer, the data it points to will not be frozen. Appending to a frozen slice will trigger a panic iff len(slice) < cap(slice). This is because appending to a full slice will allocate new memory. Unix is the only supported platform. Windows support is not planned, because it doesn't support a syscall analogous to mprotect.
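A minimal sketch of the freeze-and-crash workflow described above (assuming the package lives at github.com/lukechampine/freeze):

```go
package main

import (
	"fmt"

	"github.com/lukechampine/freeze"
)

func main() {
	xs := []int{1, 2, 3}
	// Reassign the result, as with append, so only the frozen copy remains.
	xs = freeze.Slice(xs).([]int)
	fmt.Println(xs[1]) // reading still works
	xs[1] = 99         // write to mprotect'ed memory: unrecoverable panic
}
```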
Package gorilla/pat is a request router and dispatcher with a pat-like interface. It is an alternative to gorilla/mux that showcases how it can be used as a base for different API flavors. Package pat is documented at: Let's start by registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming GET request matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Note: gorilla/pat matches path prefixes, so you must register the most specific paths first. Note: unlike pat, these methods accept a handler function, not an http.Handler. We think this is shorter and more convenient. To set an http.Handler, use the Add() method. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: The names are used to create a map of route variables which are stored in the URL query, prefixed by a colon: As in the gorilla/mux package, other matchers can be added to the registered routes and URLs can be reversed as well. To build a URL for a route, first add a name to it: Then you can get it using the name and generate a URL: ...and the result will be a url.URL with the following path: Check the mux documentation for more details about URL building and extra matchers:
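A short sketch of route registration and variable access (the paths and handlers are placeholders):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/pat"
)

func articleHandler(w http.ResponseWriter, r *http.Request) {
	// Route variables are stored in the URL query, prefixed by a colon.
	category := r.URL.Query().Get(":category")
	id := r.URL.Query().Get(":id")
	fmt.Fprintf(w, "article: category=%s id=%s", category, id)
}

func categoryHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "category: %s", r.URL.Query().Get(":category"))
}

func homeHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "home")
}

func main() {
	r := pat.New()
	// Most specific paths first, since pat matches path prefixes.
	r.Get("/articles/{category}/{id:[0-9]+}", articleHandler)
	r.Get("/articles/{category}", categoryHandler)
	r.Get("/", homeHandler)
	http.ListenAndServe(":8080", r)
}
```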
Package nject is a general purpose dependency injection framework. It provides wrapping, pruning, and indirect variable passing. It is type safe and using it requires no type assertions. There are two main injection APIs: Run and Bind. Bind is designed to be used at program initialization and does as much work as possible then, rather than during main execution. The API for nject is a list of providers (injectors) that are run in order. The final function in the list must be called. The other functions are called if their value is consumed by a later function that must be called. Here is a simple example: In this example, context.Background and log.Default are not invoked because their outputs are not used by the final function (http.ListenAndServe). The basic idea of nject is to assemble a Collection of providers and then use that collection to supply inputs for functions that may use some or all of the provided types. One big win from dependency injection with nject is the ability to reshape various different functions into a single signature. For example, having a bunch of functions with different APIs all bound as http.HandlerFunc is easy. Providers produce or consume data. The data is distinguished by its type. If you want three different strings, then define three different types: Then you can have a function that does things with the three types: The above function would be a valid injector or final function in a provider Collection. For example: This creates a sequence and executes it. Run injects a myFirst value and the sequence of providers runs: genSecond() injects a mySecond and myStringFunc() combines the myFirst and mySecond to create a myThird. Then the function given in run saves that final value. The expected output is Providers are grouped into linear sequences. When building an injection chain, the providers are grouped into several sets: LITERAL, STATIC, RUN. The LITERAL and STATIC sets run once per initialization. The RUN set runs once per invocation. Providers within a set are executed in the order that they were originally specified. Providers whose outputs are not consumed are omitted unless they are marked Required(). Collections are bound with Bind(&invocationFunction, &initializationFunction). The invocationFunction is expected to be used over and over, but the initializationFunction is expected to be used less frequently. The STATIC set is re-invoked each time the initialization function is run. The LITERAL set is just the literal values in the collection. The STATIC set is composed of the cacheable injectors. The RUN set is everything else. All injectors have the following type signature: None of the input or output parameters may be anonymously-typed functions. An anonymously-typed function is a function without a named type. Injectors whose output values are not used by a downstream handler are dropped from the handler chain. They are not invoked. Injectors that have no output values are a special case and they are always retained in the handler chain. An injector that is annotated as Cacheable() may be promoted to the STATIC set. An injector that is annotated as MustCache() must be promoted to the STATIC set: if it cannot be promoted then the collection is deemed invalid. An injector may not be promoted to the STATIC set if it takes as input data that comes from a provider that is not in the STATIC or LITERAL sets.
For example, consider arguments to the invocation function: if the invoke function takes an int as one of its inputs, then no injector that takes an int as an argument may be promoted to the STATIC set. Injectors in the STATIC set will be run exactly once per set of input values. If the inputs are consistent, then the output will be a singleton. This is true across injection chains. If the following provider is used in multiple chains, as long as the same integer is injected, all chains will share the same pointer. Injectors in the STATIC set are only run for initialization. For some things, like opening a database, that may still be too often. Injectors that are marked Memoized must be promoted to the STATIC set. Memoized injectors are only run once per combination of inputs. Their outputs are remembered. If called enough times with different arguments, memory will be exhausted. Memoized injectors may not have more than 90 inputs. Memoized injectors may not have any inputs that are go maps, slices, or functions. Arrays, structs, and interfaces are okay. This requirement is recursive, so a struct that has a slice in it is not okay. Fallible injectors are special injectors that change the behavior of the injection chain if they return error. Fallible injectors in the RUN set that return error will terminate execution of the injection chain. A non-wrapper function that returns nject.TerminalError is a fallible injector. The TerminalError does not have to be the last return value. The nject package converts TerminalError objects into error objects so only the fallible injector should use TerminalError. Anything that consumes the TerminalError should do so by consuming error instead. Fallible injectors can be in both the STATIC set and the RUN set. Their behavior is a bit different. If a non-nil value is returned as the TerminalError from a fallible injector in the RUN set, none of the downstream providers will be called. The provider chain returns from that point with the TerminalError as a return value. Since all return values must be consumed by a middleware provider or the bound invoke function, fallible injectors must come downstream from a middleware handler that takes error as a returned value if the invoke function (the function that runs a bound injection chain) does not return error. If a fallible injector returns nil for the TerminalError, the other output values are made available for downstream handlers to consume. The other output values are not considered return values and are not available to be consumed by upstream middleware handlers. The error returned by a fallible injector is not available downstream. If a non-nil value is returned as the TerminalError from a fallible injector in the STATIC set, the rest of the STATIC set will be skipped. If there is an init function and it returns error, then the value returned by the fallible injector will be returned via the init function. Unlike fallible injectors in the RUN set, the error output by a fallible injector in the STATIC set is available downstream (but only in the RUN set -- nothing else in the STATIC set will execute). Some examples: A wrap function interrupts the linear sequence of providers. It may or may not invoke the remainder of the sequence that comes after it. The remainder of the sequence is provided to the wrap function as a function that it may call. The type signature of a wrap function is a function that receives a function as its first parameter.
That function must be of an anonymous type: For example: When this wrapper function runs, it is responsible for invoking the rest of the provider chain. It does this by calling inner(). The parameters to inner are available as inputs to downstream providers. The value(s) returned by inner come from the return values of other wrapper functions and from the return value(s) of the final function. Wrap functions can call inner() zero or more times. The values returned by wrap functions must be consumed by another upstream wrap function or by the init function (if using Bind()). Wrap functions have a small amount of runtime overhead compared to other kinds of functions: one call to reflect.MakeFunc(). Wrap functions serve the same role as middleware, but are usually easier to write. Wrap functions that invoke inner() multiple times in parallel are not well supported at this time, and such invocations must have the wrap function decorated with Parallel(). Final functions are simply the last provider in the chain. They look like regular Go functions. Their input parameters come from other providers. Their return values (if any) must be consumed by an upstream wrapper function or by the init function (if using Bind()). Wrap functions that return error should take error as a returned value so that they do not mask a downstream error. Wrap functions should not return TerminalError because they internally control if the downstream chain is called. Literal values are values in the provider chain that are not functions. Provider chains can be invalid for many reasons: inputs of a type not provided earlier in the chain; annotations that cannot be honored (e.g. MustCache & Memoize); return values that are not consumed; functions that take or return functions with an anonymous type other than wrapper functions; a chain that does not terminate with a function; etc. Bind() and Run() will return error when presented with an invalid provider chain. Bind() and Run() will return error rather than panic. After Bind()ing an init and invoke function, calling them will not panic unless a provider panic()s. A wrapper function can be used to catch panics and turn them into errors. When doing that, it is important to propagate any errors that are coming up the chain. If there is no guaranteed function that will return error, one can be added with Shun(). Bind() uses a complex and somewhat expensive O(n^2) set of rules to evaluate which providers should be included in a chain and which can be dropped. The goal is to keep the ones you want and remove the ones you don't want. Bind() tries to figure this out based on the dependencies and the annotations. MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired chain element and all outputs are consumed by some other provider. Not MustConsume, not Desired: only include if at least one output is transitively consumed by a Required or Desired provider. Not MustConsume, Desired: include if all inputs are available. MustConsume, Desired: only include if all outputs are transitively consumed by a Required or Desired chain element. When there are multiple providers of a type, Bind() tries to get it from the closest provider. Providers that have unmet dependencies will be eliminated from the chain unless they're Required. The remainder of this document consists of suggestions for how to use nject. Contributions to this section would be welcome, as would links to blogs or other discussions of using nject in practice.
The best practice for using nject inside a large project is to have a few common chains that everyone imports. Most of the time, these common chains will be early in the sequence of providers. Customization of the import chains happens in many places. This is true for services, libraries, and tests. For tests, a wrapper that includes the standard chain makes it easier to write tests. See github.com/memsql/ntest for helper functions and more examples. If nject cannot bind or run a chain, it will return error. The returned error is generally very good, but it does not contain the full debugging output. The full debugging output can be obtained with the DetailedError function. If the detailed error shows that nject has a bug, note that part of the debug output includes a regression test that can be turned into an nject issue. Remove the comments to hide the original type names. The Reorder() decorator allows injection chains to be fully or partially reordered. Reorder is currently limited to a single pass and does not know which injectors are ultimately going to be included in the final chain. It is likely that if you mark your entire chain with Reorder, you'll have unexpected results. On the other hand, Reorder provides a safe and easy way to solve some common problems. For example: providing optional options to an injected dependency. Because the default options are marked as Shun, they'll only be included if they have to be included. If a user of thingChain wants to override the options, they simply need to mark their override as Reorder. To make this extra friendly, a helper function to do the override can be provided and used. Recommended best practice is to have injectors shut down the things they themselves start. They should do their own cleanup. Inside tests, an injector can use t.Cleanup() for this. For services, something like t.Cleanup can easily be built: Alternatively, any wrapper function can do its own cleanup in a defer that it defines. Wrapper functions have a small runtime performance penalty, so if you have more than a couple of providers that need cleanup, it makes sense to include something like CleaningService. The normal direction of forced inclusion is that an upstream provider is required because a downstream provider uses a type produced by the upstream provider. There are times when the relationship needs to be reversed. For example, a type gets modified by a downstream injector. The simplest option is to combine the providers into one function. Another possibility is to mark the upstream provider with MustConsume and have it produce a type that is only consumed by the downstream provider. Lastly, the providers can be grouped with Cluster so that they'll be included or excluded as a group. Example shows what gets included and what does not for several injection chains. These examples are meant to show the subtlety of what gets included and why. This example explores injecting a database handle or transaction only when they're used.
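To ground the earlier myFirst/mySecond/myThird walkthrough, here is a hedged sketch of a small Run chain (the import path reflects nject's current home and may differ for older releases):

```go
package main

import (
	"fmt"
	"log"

	"github.com/muir/nject"
)

// Distinct named types let the injector tell the three strings apart.
type myFirst string
type mySecond string
type myThird string

func genSecond() mySecond { return "second" }

func myStringFunc(f myFirst, s mySecond) myThird {
	return myThird(string(f) + " " + string(s))
}

func main() {
	err := nject.Run("example",
		myFirst("first"), // a literal value provider
		genSecond,        // injector: produces mySecond
		myStringFunc,     // injector: consumes myFirst and mySecond
		func(t myThird) { // final function: always called
			fmt.Println(t)
		},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```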
Package dngn is a simple random map generation library primarily made to be used for 2D games. It features a simple API, and a couple of different means to generate maps. The easiest way to kick things off when using dngn is to simply create a Layout to represent your overall game map, which can then be manipulated or have a Generate function run on it to actually generate the content on the map.
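A hedged sketch of that flow; the constructor and generator names below are assumptions about dngn's API for illustration, not verified signatures:

```go
// Create a Layout representing the overall game map, then run one of
// its Generate functions to fill in the content (names assumed).
layout := dngn.NewLayout(40, 20)
layout.GenerateDrunkWalk(' ', 0.5) // hypothetical: carve until ~50% is open
```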
Package marshmallow provides a simple API to perform flexible and performant JSON unmarshalling. Unlike other packages, marshmallow supports unmarshalling of some known and some unknown fields with no performance overhead and no extra coding needed. While unmarshalling, marshmallow allows you to fully retain the original data and access it via a typed struct and a dynamic map. https://github.com/perimeterx/marshmallow
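A short example of that known-plus-unknown unmarshalling; the struct and JSON payload are made up, but marshmallow.Unmarshal returning both the typed struct and a dynamic map is the package's documented shape:

```go
package main

import (
	"fmt"

	"github.com/perimeterx/marshmallow"
)

type user struct {
	Name string `json:"name"`
}

func main() {
	data := []byte(`{"name": "alice", "plan": "pro", "logins": 42}`)
	var u user
	// u captures the known field; result retains every field,
	// known and unknown alike, as a map.
	result, err := marshmallow.Unmarshal(data, &u)
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name)         // alice
	fmt.Println(result["plan"]) // pro
}
```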
Package types implements concrete types for the dcrwallet JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here; however, it is also instructive to read the documentation for the dcrjson package (https://godoc.org/github.com/decred/dcrd/dcrjson). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two-step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two-step process: As above, this approach is used since it provides the caller with access to the fields in the response, such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function, which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
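As a hedged sketch of the two-step Request unmarshalling (the Request and ParseParams names follow the dcrjson package, but exact signatures vary between dcrjson versions and are assumptions here):

```go
// Step 1: unmarshal into the generic dcrjson.Request, which keeps the
// fields that are not part of the command itself, such as the ID.
// Step 2: parse the params into the concrete command struct that this
// package registered for the method.
func decodeRequest(raw []byte) (interface{}, error) {
	var req dcrjson.Request
	if err := json.Unmarshal(raw, &req); err != nil {
		return nil, err
	}
	return dcrjson.ParseParams(dcrjson.Method(req.Method), req.Params)
}
```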
Package wiremock is a client for the WireMock API. WireMock is a simulator for HTTP-based APIs. Some might consider it a service virtualization tool or a mock server. HTTP request: With response: Stub: The client should reset all stubs it has made after tests: To avoid conflicts, you can delete individual rules: You can verify whether a request that matches the mapping has been made.
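A hedged sketch of the stub/reset/verify flow; the endpoint and body are made up, and the builder methods shown (StubFor, WillReturn, Reset, DeleteStub, Verify) are assumed from the go-wiremock client rather than quoted from its docs:

```go
client := wiremock.NewClient("http://localhost:8080")

// Stub: GET /user answers with a fixed JSON body.
stub := wiremock.Get(wiremock.URLPathEqualTo("/user")).
	WillReturn(`{"id": 1}`, map[string]string{"Content-Type": "application/json"}, 200)
if err := client.StubFor(stub); err != nil {
	panic(err)
}
defer client.Reset() // reset all made stubs after the test

// Verify that a matching request was made exactly once.
ok, err := client.Verify(stub.Request(), 1)
_, _ = ok, err

// Or delete just this rule to avoid conflicts with other tests.
_ = client.DeleteStub(stub)
```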
Package z3 provides Go bindings to the Z3 SMT Solver. The bindings are a balance between idiomatic Go and being an obvious translation from the Z3 API, so you can look up Z3 APIs and find the intuitive mapping in Go. The most foreign thing to Go programmers will be error handling. Rather than return the `error` type from almost every function, the z3 package mimics Z3's API by requiring you to set an error handler callback. This error handler will be invoked whenever an error occurs. See ErrorHandler and Context.SetErrorHandler for more information.
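A hedged sketch of that callback-based error handling; the constructor names are common to Go Z3 bindings, and the handler's parameter type is an assumption rather than the package's exact signature:

```go
cfg := z3.NewConfig()
ctx := z3.NewContext(cfg)
defer ctx.Close()

// Instead of checking an error return on every call, register a
// handler that is invoked whenever Z3 reports an error.
ctx.SetErrorHandler(func(err error) { // parameter type assumed
	log.Fatalf("z3 error: %v", err)
})
```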
Package hibe implements the cryptosystem described in the paper "Hierarchical Identity Based Encryption with Constant Size Ciphertext" by Boneh, Boyen, and Goh. The algorithms call for us to use a group G that is bilinear, i.e., there exists a bilinear map e: G x G -> G2. However, the bn256 library uses a slightly different definition of bilinear groups: it defines it as a triple of groups (G2, G1, GT) such that there exists a bilinear map e: G2 x G1 -> GT. The paper calls this an "asymmetric bilinear group". It turns out that we are lucky. Both G2 and G1, as implemented in bn256, share the same order, and that order (bn256.Order) happens to be a prime number p. Therefore G2 and G1 are both isomorphic to Zp. This is important for two reasons. First, the algorithm requires G to be a cyclic group. Second, this implies that G2 and G1 are isomorphic to each other. This means that as long as we are careful, we can use this library to carry out a computation that is logically equivalent to the case where G2 and G1 happen to be the same group G. For simplicity, take G = G2. In other words, choose the G used in Boneh's algorithms to be the group G2 provided by bn256. In order for this to work, we need to choose a single isomorphism phi: G2 -> G1 and stick with it for all operations. Let g1 be the base of G2, and g2 be the base of G1, as provided via the APIs of bn256. We define phi as follows: phi(g1 ^ a) = g2 ^ a, for all a in Z. This is well defined because G2 is isomorphic to Zp, a cyclic group. What this means is that, if we are working with some x in G to implement the algorithm, then we must do so using g1 ^ k in G2 and g2 ^ k in G1, where g1 ^ k = x. Using this method, we can emulate the requirements of Boneh's algorithm. Furthermore, note that a marshalled G1 element is 64 bytes, whereas a marshalled G2 element is 128 bytes. Therefore, we actually switch the order of arguments to the bilinear map e so that marshalled parameters and keys are smaller (since otherwise, more elements would be passed as the second argument and therefore take up a lot of space). Note that switching the order of arguments to a bilinear map (asymmetric or otherwise) maintains bilinearity. One more thing to note is that the group, as described in the paper, is multiplicative, whereas the bn256 library uses additive notation. Keep this in mind if you ever need to read through the code.
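The isomorphism phi can be made concrete with bn256's scalar-multiplication API; the helper name below is ours, but ScalarBaseMult on G1 and G2 is the library's real interface:

```go
import (
	"math/big"

	"golang.org/x/crypto/bn256"
)

// phiPair represents one logical element x = g^a of the paper's group G
// as the pair (g1^a in G2, g2^a in G1), i.e. (x, phi(x)). Keeping the
// two representations in lockstep is what makes phi well defined.
func phiPair(a *big.Int) (*bn256.G2, *bn256.G1) {
	return new(bn256.G2).ScalarBaseMult(a), new(bn256.G1).ScalarBaseMult(a)
}
```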
Package goBolt implements drivers for the Neo4J Bolt Protocol versions 1-4. There are some limitations to the types of collections the internalDriver supports. Specifically, maps should always be of type map[string]interface{} and lists should always be of type []interface{}. It doesn't seem that the Bolt protocol supports uint64 either, so the biggest number it can send right now is the int64 max. The URL format is: `bolt://(user):(password)@(host):(port)` The scheme must be `bolt`. User and password are only necessary if you are authenticating. TLS is supported by using query parameters on the connection string, like so: `bolt://host:port?tls=true&tls_no_verify=false` The supported query params are:

* timeout - the number of seconds to set the connection timeout to. Defaults to 60 seconds.
* tls - set to 'true' or '1' if you want to use TLS encryption
* tls_no_verify - set to 'true' or '1' if you want to accept any server certificate (for testing, not secure)
* tls_ca_cert_file - path to a custom CA cert for a self-signed TLS cert
* tls_cert_file - path to a cert file for this client (need to verify this is processed by Neo4j)
* tls_key_file - path to a key file for this client (need to verify this is processed by Neo4j)

Errors returned from the API support wrapping, so if you receive an error from the library, it might be wrapping other errors. You can get the innermost error by using the `InnerMost` method. Failure messages from Neo4J are reported, along with their metadata, as an error. To get the failure message metadata from a wrapped error, call `err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata` If there is an error with the database connection, you should get a sql/internalDriver ErrBadConn as per the best practice recommendations of the Golang SQL Driver. However, this error may be wrapped, so you might have to call `InnerMost` to get it, as specified above.
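A hedged sketch of the connection string and the documented `InnerMost` unwrapping; connect is a hypothetical placeholder, and errors and messages refer to goBolt's own packages as described above:

```go
// TLS options ride along as query parameters on the bolt:// URL.
const uri = "bolt://user:secret@localhost:7687?timeout=30&tls=true&tls_no_verify=false"

if err := connect(uri); err != nil { // connect: hypothetical helper
	if e, ok := err.(*errors.Error); ok {
		// Unwrap to the innermost error; a Neo4J failure carries
		// its metadata on the failure message.
		if failure, ok := e.InnerMost().(messages.FailureMessage); ok {
			fmt.Println("failure metadata:", failure.Metadata)
		}
	}
}
```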
Package zdb provides a nice API to interact with SQL databases in Go. All query functions (Exec, NumRows, InsertID, Get, Select, Query) use named parameters (:name) if params contains a map or struct; positional parameters (? or $1) are used if it doesn't. You can add multiple structs or maps, but mixing named and positional parameters is not allowed. Everything between {{:name ..}} is parsed as a conditional; for example {{:foo query}} will only be added if "foo" from params is true or not a zero value. Conditionals only work with named parameters. If the query starts with "load:" then it's loaded from the filesystem or embedded files; see Load() for details. Additional DumpArgs can be added to "dump" information to stderr for testing and debugging: Running the query twice for a select is usually safe (just slower), but running an insert, update, or delete twice may cause problems.
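A hedged sketch of named parameters plus a conditional clause; the table, columns, and User type are made up, and the zdb.Select signature is assumed from the function list above:

```go
var users []User
err := zdb.Select(ctx, &users,
	`select * from users where site_id = :site {{:email and email = :email}}`,
	map[string]interface{}{
		"site":  1,
		"email": "", // zero value, so the {{:email ...}} clause is dropped
	},
)
```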