Package ql implements a pure Go embedded SQL database engine.

QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2020-12-10: The database/sql driver now supports the url parameter removeemptywal=N, which has the same semantics as passing RemoveEmptyWAL = N != 0 to the OpenFile options.

2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add the IsDuplicateUniqueIndexError function.

2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases, set the FileFormat field in *Options passed to OpenFile to the value 2, or use the driver named "ql2" instead of "ql".

- Both the old and the new driver will properly open and use, read and write, the old (V1) or new (V2) file format of an existing database.

- The V1 format has a record size limit of ~64 kB. The V2 format record size limit is math.MaxInt32.

- The V1 format uncommitted transaction size is limited by memory resources. The V2 format uncommitted transaction size is limited by free disk space.

- A direct consequence of the previous point is that small transactions perform better using the V1 format and big transactions perform better using the V2 format.

- The V2 format uses substantially less memory.

2018-08-02: Release v1.2.0 adds initial support for Go modules.

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for the latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before, plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and a query which fails to use an index or indices when, in your opinion, it should. Bonus points for including the output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was previously further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful together with the UNIQUE clause to check for semantically duplicate rows before they get added to the table, or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of the table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now provides that information in the new ColumnInfo fields NotNull, Constraint and Default.

2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.

2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)

2015-02-16: The IN predicate now accepts a SELECT statement. See the updated "Predicates" section.

2015-01-17: The logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea not to use any name appearing in, for example, [7] in your queries, as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".

2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also http://www.sqlite.org/faq.html#q19; the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is, strictly speaking, no longer required. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.

2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.

2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)

2014-06-05: The RecordSet interface was extended by the new methods FirstRow and Rows.

2014-06-02: Indices on id() are now used by SELECT statements.

2014-05-07: Introduction of Marshal, Schema, Unmarshal.

2014-04-15: Added an optional IF NOT EXISTS clause to CREATE INDEX and an optional IF EXISTS clause to DROP INDEX.

2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.

2014-04-11: Introduction of LIMIT, OFFSET.

2014-04-10: Introduction of query rewriting.

2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
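For instance, here is a minimal sketch of the batch-insert advice above, using the database/sql driver. It assumes the driver package registers the "ql"/"ql2" driver names as described in the changelog; the file name and schema are hypothetical.

	package main

	import (
		"database/sql"
		"log"

		_ "modernc.org/ql/driver" // registers the "ql" driver (and, per the notes above, "ql2")
	)

	func main() {
		db, err := sql.Open("ql", "ql.db") // file-backed DB; "ql2" would select the V2 format
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// QL mutations must happen inside a transaction.
		tx, err := db.Begin()
		if err != nil {
			log.Fatal(err)
		}
		if _, err := tx.Exec("CREATE TABLE IF NOT EXISTS t (c1 int, c2 string);"); err != nil {
			log.Fatal(err)
		}
		// Many inserts batched in one transaction, as recommended above.
		for i := 0; i < 1000; i++ {
			if _, err := tx.Exec("INSERT INTO t VALUES ($1, $2);", int64(i), "foo"); err != nil {
				log.Fatal(err)
			}
		}
		if err := tx.Commit(); err != nil {
			log.Fatal(err)
		}
	}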
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option).

The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified.

QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters.

Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement.

Implementation restriction: A byte order mark is disallowed anywhere in QL statements.

The following terms are used to denote specific character classes. The underscore character _ (U+005F) is considered a letter.

Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters.

Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.

Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token.

The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.)

Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter.

A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL keywords or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes.

	«TRANSACTION»
	«duration»
	«lovely stories»

No identifiers are predeclared, however note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores.

The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens. Operators consisting of more than one character are referred to by names in the rest of the documentation.

An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15.

A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided.

An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i.

A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats.

The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.

Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base.

Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves.

After a backslash, certain single-character escapes represent special values. All other sequences starting with a backslash are illegal inside rune literals.

A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals.

Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF.

If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal.

Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent.

Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type.

A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.

Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G.

A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool.

A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.

A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types include signed and unsigned integers of various sizes, bigint and bigrat, and floating-point and complex types. The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment.

A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len.

A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The built-in functions for working with times and durations, described below, are implicitly declared.

An expression specifies the computation of a value by applying operators and functions to operands.

Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression.
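Illustrating the QL parameters described above, here is a minimal sketch using the package-level API rather than the database/sql driver. The table and values are hypothetical and error handling is abbreviated.

	package main

	import (
		"log"

		"modernc.org/ql"
	)

	func main() {
		db, err := ql.OpenMem() // throw-away in-memory DB
		if err != nil {
			log.Fatal(err)
		}

		// $1 and $2 are QL parameters, bound from the argument list at execution time.
		ctx := ql.NewRWCtx()
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (i int, s string);
				INSERT INTO t VALUES ($1, $2);
			COMMIT;`,
			int64(42), "foo",
		); err != nil {
			log.Fatal(err)
		}
	}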
A qualified identifier is an identifier qualified with a table/record set name prefix.

Primary expressions are the operands for unary and binary expressions.

A primary expression of the form s[x] denotes the element of the string s indexed by x. Its type is byte. The value x is called the index. The following rules apply

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.

- A constant index must be non-negative and representable by a value of type int.

- A constant index must be in range if the string s is a literal.

- If x is out of range at run time, a run-time error occurs.

- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string s, the primary expression s[low : high] constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand.

The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.

Given an identifier f denoting a predeclared function, a call of the form f(a1, a2, … an) calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error.

Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone.

Expressions of the form expr1 LIKE expr2 yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL.

Predicates are special form expressions having a boolean result type. Expressions of the form expr IN ( expr1, … exprN ) are equivalent, including NULL handling, to expr == expr1 || expr == expr2 || … || expr == exprN. The types of the involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types.

Expressions of the form expr1 BETWEEN expr2 AND expr3 are equivalent, including NULL handling, to expr1 >= expr2 && expr1 <= expr3. The types of the involved expressions must be ordered as defined in "Comparison operators".

Expressions of the form expr IS NULL (case A) or expr IS NOT NULL (case B) yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false.

Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar.

Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.

	+    sum                   integers, rationals, floats, complex values, strings
	-    difference            integers, rationals, floats, complex values, times
	*    product               integers, rationals, floats, complex values
	/    quotient              integers, rationals, floats, complex values
	%    remainder             integers
	&    bitwise AND           integers
	|    bitwise OR            integers
	^    bitwise XOR           integers
	&^   bit clear (AND NOT)   integers
	<<   left shift            integer << unsigned integer
	>>   right shift           integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration.

For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the relationships x == q*y + r and |r| < |y|, with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation.

The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
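As a quick illustration of truncated division, here is a tiny sketch in Go, whose integer division follows the same rule:

	package main

	import "fmt"

	func main() {
		// Truncated division: 7 / -2 truncates toward zero, so q == -3 and
		// r == 1, and the identity 7 == q*(-2) + r holds.
		q, r := 7/-2, 7%-2
		fmt.Println(q, r) // -3 1
	}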
For integer operands, the unary operators +, -, and ^ denote identity, negation, and bitwise complement, respectively. For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific.

Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL.

For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”.

For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program.

Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.

- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).

- Integer values are comparable and ordered, in the usual way. Note that durations are integers.

- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.

- Rational values are comparable and ordered, in the usual way.

- String and Blob values are comparable and ordered, lexically byte-wise.

- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. Logical operations involving NULL values have their own truth tables.

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.

- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.

- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.

- x's type and T are both integer or floating point types.

- x's type and T are both complex types.

- x is an integer, except bigint or duration, and T is a string type.
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

For conversions to or from a string type, the following rules apply

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using a fixed default layout.

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of an expression containing the function calls h(), i(), j() and an operand c, the calls and the evaluation of c happen in the order h(), i(), j(), c.

Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause the statement adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns.

When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of the COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level.

A database cannot be updated (mutated) outside of a transaction; statements which mutate the DB require an open transaction. A database is effectively read only outside of a transaction; statements which only read the DB do not require one.

The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent.

Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as the name of any existing table and it also cannot be the same as any column name of the table the index is on.

Certain SELECT statements may now use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists.

A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types.

Note: Blob-like types are blob, bigint, bigrat, time and duration.
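For instance, a small sketch of index creation (the person table and its columns are hypothetical; the statements could be passed to DB.Run or executed through the database/sql driver):

	// A simple index on a column, an index on the record id(), and a
	// UNIQUE composite (expression-list) index.
	const indices = `
		BEGIN TRANSACTION;
			CREATE INDEX PersonName ON person (Name);
			CREATE INDEX PersonID ON person (id());
			CREATE UNIQUE INDEX PersonNameDOB ON person (Name, DOB);
		COMMIT;
	`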
Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB.

The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists.

The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs.

The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name.

When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.

4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are a result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which allows inserting values calculated from the same table the rows are being inserted into.

If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL.

If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS clause turns the insert statement into a no-op in such a case.

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis.
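To make this concrete, a small sketch with a hypothetical schema using a NOT NULL constraint and a DEFAULT expression, plus an insert that leaves the defaulted column NULL so the default is applied:

	// The statements could be passed to DB.Run or executed through the
	// database/sql driver; the table and column names are made up.
	const example = `
		BEGIN TRANSACTION;
			CREATE TABLE IF NOT EXISTS department (
				DepartmentName string NOT NULL,
				Head           string DEFAULT "TBD"
			);
			INSERT INTO department (DepartmentName) VALUES ("Sales");
		COMMIT;
	`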
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines. The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement.

If the statement has nothing special in its plan, the result is the original statement.

To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened.

The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.

If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions.

A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed.

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list.

When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional LEFT JOIN clause on, say, tables a and b with condition expr is mostly equal to the corresponding cross join filtered by expr, except that the rows from a which, when they appear in the cross join, never cause expr to evaluate to true, are combined with a virtual row from b, containing all NULLs, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause.
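For instance, a hypothetical query combining the WHERE, AS and ORDER BY clauses described above (it reuses the made-up department table from the earlier sketch):

	const query = `
		SELECT DepartmentName AS Dept, Head
		FROM department
		WHERE Head IS NOT NULL
		ORDER BY DepartmentName;
	`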
Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159).

The GROUP BY clause is used to project rows having common values into a smaller set of rows. Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier.

The optional OFFSET clause allows ignoring the first N records. For example, with OFFSET 10 only rows 11, 12, ... of the record set are produced, if they exist. The value of the expression must be a non negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example, with LIMIT 10 at most the first 10 records of the record set are returned. The value of the expression must be a non negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example, considering table t has, say, 10 records, then LIMIT 5 OFFSET 3 will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.

The evaluation of a SELECT statement proceeds as follows:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist.

Update statements change values of fields in rows of a table.
Note: The SET clause is optional.

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying the DB meta data, there exist specially named tables, some of them being virtual.

Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. It's possible to obtain a consolidated recordset for all properties of all DB columns by combining (joining) __Column and __Column2. The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value, or the number of rows in a record set. Note: count() returns 0 for an empty record set.

Date returns the time corresponding to the given year, month, day, hour, min, sec and nsec values in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1.

A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which.

A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.
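A hypothetical query using the date/time built-ins described above; the events table, its ts column, and the exact argument order of date() are illustrative assumptions:

	// Select events after the start of 2015 (UTC), returning some of
	// their date components.
	const timeQuery = `
		SELECT id(), ts, day(ts), month(ts), year(ts)
		FROM events
		WHERE ts > date(2015, 1, 1, 0, 0, 0, 0, "UTC")
		ORDER BY ts;
	`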
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). Formatting the same time value thus returns a string containing "CET" on a machine in the CET time zone, but may return one containing "+0100" on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using go's number format functions in the `strconv` package. Only the first argument is mandatory; the remaining arguments have default values. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables).

If id() without arguments is called for a row which is not a table record then the result value is NULL.

If id() has one argument it must be a table name of a table in a cross join.

The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type.

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. The column values must be of an ordered type.

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string.

Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC.

When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset.

If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.

The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL.

For each of the numeric types, a guaranteed size is specified.

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f, queried with a WHERE expression that doesn't comply with the above simple detected cases, for instance one relating t.f and u.f. Such a query is now automatically rewritten into an equivalent form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only few rows from both tables are selected by the respective WHERE filtering.

Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
Package amboy provides basic infrastructure for running and describing tasks and task workflows with, potentially, minimal overhead and additional complexity. Amboy works with 4 basic logical objects: jobs, or descriptions of tasks; runners, which are responsible for executing tasks; queues, that represent pipelines and offline workflows of tasks (e.g. not real time, processes that run outside of the primary execution path of a program); and dependencies that represent relationships between jobs. The inspiration for amboy was to be able to provide a unified way to define and run jobs, that would feel equally "native" for distributed applications and distributed web applications, and move easily between different architectures. While amboy users will generally implement their own Job and dependency implementations, Amboy itself provides several example Queue implementations, as well as several generic examples and prototypes of Job and dependency.Manager objects. Generally speaking you should be able to use included amboy components to provide the queue and runner components, in conjunction with custom and generic job and dependency variations. Consider the following example: The amboy package provides a number of generic methods that, using the Queue.Stats() method, block until all jobs are complete. They provide different semantics, which may be useful in different circumstances. All of these functions wait until the total number of jobs submitted to the queue is equal to the number of completed jobs, and as a result these methods don't prevent other threads from adding jobs to the queue after beginning to wait. Additionally, there are a set of methods that allow callers to wait for a specific job to complete.
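The wait semantics described above can be illustrated with a minimal polling sketch built on Queue.Stats(), which the package documents; the Completed and Total field names, the import path, and whether Stats takes a context vary by version and are assumptions here.

	package main

	import (
		"context"
		"time"

		"github.com/mongodb/amboy" // assumed import path
	)

	// waitForCompletion sketches what the amboy wait helpers do internally:
	// block until every job submitted so far has completed. The QueueStats
	// field names Completed and Total are assumed.
	func waitForCompletion(ctx context.Context, q amboy.Queue) {
		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				stats := q.Stats() // newer releases may take a context argument
				if stats.Completed >= stats.Total {
					return
				}
			}
		}
	}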
Package conf provides support for using environmental variables and command line arguments for configuration. It is compatible with the GNU extensions to the POSIX recommendations for command-line options. See http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html There are no hard bindings for this package. This package takes a struct value and parses it for both the environment and flags. It supports several tags to customize the flag options. The field name and any parent struct name will be used for the long form of the command name unless the name is overridden. As an example, using this config struct: The following usage information would be output, which you can display. Usage: conf.test [options] [arguments] OPTIONS There is an API called Parse that can process a config struct with environment variable and command line flag overrides. There is also YAML support using the yaml package that is part of this module. There is a WithParse function that takes a slice of bytes containing the YAML or WithParseReader that takes any concrete value that knows how to Read. Additionally, if the config struct has a field of the slice type conf.Args then it will be populated with any remaining arguments from the command line after flags have been processed. For example a program with a config struct like this: If that program is executed from the command line like this: Then the cfg.Args field will contain the string values ["serve", "http"]. The Args type has a method Num for convenient access to these arguments such as this: You can add a version with a description by adding the Version type to your config type and set these values at run time for display.
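A hedged sketch of the kind of config struct and Parse call described above; the struct tag syntax, the Parse signature, and the import path are all assumptions about the installed version and should be verified against the package's own examples.

	package main

	import (
		"fmt"
		"os"

		"github.com/ardanlabs/conf" // assumed import path
	)

	type config struct {
		conf.Version // set its fields at run time to display a version
		Web          struct {
			Host string `conf:"default:0.0.0.0:3000"` // becomes --web-host / APP_WEB_HOST (assumed tag syntax)
		}
		Args conf.Args // remaining command line arguments land here
	}

	func main() {
		var cfg config
		// Parse reads environment variables prefixed with "APP" and then the
		// command line flags; the exact signature is assumed here.
		if err := conf.Parse(os.Args[1:], "APP", &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("remaining args:", cfg.Args)
	}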
bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
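To make the memcopy discussion above concrete, here is a hedged sketch of the two generated-code shapes; the identifiers and byte values are illustrative placeholders rather than actual go-bindata output.

	package assets

	import (
		"reflect"
		"unsafe"
	)

	// Default mode: each call builds and returns a fresh copy of the embedded
	// data (the extra memcopy), but the result is safe to modify and needs no
	// reflect/unsafe dependency.
	func myAsset() []byte {
		return []byte{0x89, 0x50, 0x4e, 0x47} // ... truncated sample bytes
	}

	// NoMemCopy mode: the data is a string constant the linker can place in
	// .rodata; the accessor builds a []byte header pointing at it via unsafe,
	// so no copy is made, but writing to the returned slice panics at runtime.
	var _myAsset = "\x89\x50\x4e\x47" // ... truncated sample bytes

	func myAssetNoCopy() []byte {
		var empty [0]byte
		sx := (*reflect.StringHeader)(unsafe.Pointer(&_myAsset))
		b := empty[:]
		bx := (*reflect.SliceHeader)(unsafe.Pointer(&b))
		bx.Data = sx.Data
		bx.Len = len(_myAsset)
		bx.Cap = bx.Len
		return b
	}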
Meeus implements algorithms from the book "Astronomical Algorithms" by Jean Meeus. It follows the second edition, copyright 1998, with corrections as of August 10, 2009. It requires Go 1.1 or later. Jean Meeus's book has long been respected as a broad-reaching source of astronomical algorithms, and many code libraries have been based on it. This library will be distinct in several respects, I hope. First of all it is in the Go language, a programming language new enough that it well postdates the book itself. Go has many advantages for a large and diverse library such as this and it is a fine language for scientific computations. I hope that a Go implementation will prove relevant for some time in the future. Next, this library attempts fairly comprehensive coverage of the book. Each chapter of the book is addressed, and in the very few cases where there seems no code from the chapter that is applicable in Go, similar and more appropriate techniques are at least discussed in documentation. If meaningful, examples are given as well. While this library attempts fairly comprehensive coverage of the book, it does not attempt to present a complete, well rounded, and polished astronomy library. Such a production-quality astronomy library would likely include some updated routines and data, routines and data from other sources, and would fill in various holes of functionality which Meeus elects to gloss over. Such a library could certainly be derived from this one, but it is beyond the scope of what is attempted here. Thus, this library should represent a solid foundation for the development of a broad range of astronomy software. Much software should be able to use this library directly. Some software will need routines from additional sources. When the API of this library begins to present friction with that of other code, it may be time to fork or otherwise derive a new library from this one. Please feel free to do this, respecting of course the MIT license under which this software is offered. By Go convention, each package is in its own subdirectory. The "subdirectories" list of this documentation page lists all packages of the library. Each package also corresponds to exactly one chapter of the book. The package documentation heading references the chapter number and a cross reference is given below of chapter numbers and package names. Within a chapter of the book, Meeus presents explanatory text, numbered formulas, numbered examples, and other exercises. Within a package of this library, there are library functions and other codified definitions; there are Go examples which appear in documentation and which are also evaluated and verified to produce correct output by the go test feature; and there is test code which is neither part of the API nor the documentation but which is verified by the go test feature. The "API", or choice of functions to implement in Go, covers many of Meeus's numbered formulas and covers the algorithms needed to work most of the numbered examples. The correspondence is not one-to-one, but often "refactored" into functions that seem more idiomatic to Go. This is set as the limit of the API however, and thus the limit of the functionality offered by this library. Each numbered example in the book is also translated to a Go example function. This typically shows how to use the implemented API to compute the results of the example. As the go test feature validates these results, the examples also serve as baseline tests of the correctness of the API code.
Relevant "exercises" from the book are also often implemented as Go examples. A few packages remain incomplete. A package is considered complete if it implements all major formulas and algorithms and if it implements all numbered examples. For incomplete packages, the package documentation will describe the ways in which it is incomplete and typically give reasons for the incompleteness. In addition to the chapter packages, there is a package called "base". This contains a few definitions that are provided by Meeus but are of such general use that they really don't belong with any one chapter. The much greater bulk of base however, are functions which Meeus does not explicitly provide, but again are of general use. The nature of these functions is as helper subroutines or IO subroutines. The functions do not offer additional astronomy algorithms beyond those provided by Meeus. To more closely follow the book's use of Greek letters and other symbols, Unicode is freely used in the source code. Recognizing that these symbols are awkard to enter in many environments however, they are avoided for exported symbols that comprise the library API. The function Coord.EclToEq for example, returns (α, δ float64) but of course you can assign these return values to whatever variables you like. The struct Coord.Equatorial on the other hand, has exported fields RA and Dec. ASCII is used in this case to simplify using these symbols in your code. Some identifiers use the prime symbol (ʹ). That's Unicode U+02B9, not the ASCII '. Go uses ASCII ' for raw strings and does not allow it in identifiers. U+02B9 on the other hand is Unicode category Lm, and is perfectly valid in Go identifiers. An earler version of this library used the Go type float64 for most parameters and return values. This allowed terse, efficient code but required careful attention to the scaling or units used. Go defined types are now used for Time, RA, HourAngle, and general Angle quantities in the interest of making units and coversions more clear. These types are defined in the external package github.com/soniakeys/unit. An earlier version of this library included routines for formatting sexagesimal quantities. These have been moved to the external package github.com/soniakeys/sexagesimal and use of this package is now restricted to examples and tests. Meeus packages and the sexagesimal package both depend on the unit package. Meeus packages do not depend on sexagesimal, although the Meeus tests do. .
Package wasmtime is a WebAssembly runtime for Go powered by Wasmtime. This package provides everything necessary to compile and execute WebAssembly modules as part of a Go program. Wasmtime is a JIT compiler written in Rust, and can be found at https://github.com/bytecodealliance/wasmtime. This package is a binding to the C API provided by Wasmtime. The API of this Go package is intended to mirror the Rust API (https://docs.rs/wasmtime) relatively closely, so if you find something is under-documented here then you may have luck consulting the Rust documentation as well. As always though feel free to file any issues at https://github.com/bytecodealliance/wasmtime-go/issues/new. It's also worth pointing out that the authors of this package up to this point primarily work in Rust, so if you've got suggestions of how to make this package more idiomatic for Go we'd love to hear your thoughts! An example of a wasm module which calculates the GCD of two numbers An example of instantiating a small wasm module which imports functionality from the host, then calling into wasm which calls back into the host.
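A minimal sketch of instantiating and calling a module with this package follows; whether GetFunc, Call, and NewInstance take a Store argument has changed between releases, so treat the exact signatures as assumptions, and note this is a trivial add function rather than the GCD example referred to above.

	package main

	import (
		"fmt"

		"github.com/bytecodealliance/wasmtime-go"
	)

	func main() {
		engine := wasmtime.NewEngine()
		store := wasmtime.NewStore(engine)

		// A tiny module exporting an "add" function.
		wasm, err := wasmtime.Wat2Wasm(`
			(module
			  (func (export "add") (param i32 i32) (result i32)
			    local.get 0
			    local.get 1
			    i32.add))`)
		if err != nil {
			panic(err)
		}
		module, err := wasmtime.NewModule(engine, wasm)
		if err != nil {
			panic(err)
		}
		instance, err := wasmtime.NewInstance(store, module, nil)
		if err != nil {
			panic(err)
		}
		add := instance.GetFunc(store, "add")
		sum, err := add.Call(store, 6, 27)
		if err != nil {
			panic(err)
		}
		fmt.Println(sum) // 33
	}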
Package ssa/interp defines an interpreter for the SSA representation of Go programs. This interpreter is provided as an adjunct for testing the SSA construction algorithm. Its purpose is to provide a minimal metacircular implementation of the dynamic semantics of each SSA instruction. It is not, and will never be, a production-quality Go interpreter. The following is a partial list of Go features that are currently unsupported or incomplete in the interpreter. * Unsafe operations, including all uses of unsafe.Pointer, are impossible to support given the "boxed" value representation we have chosen. * The reflect package is only partially implemented. * The "testing" package is no longer supported because it depends on low-level details that change too often. * "sync/atomic" operations are not atomic due to the "boxed" value representation: it is not possible to read, modify and write an interface value atomically. As a consequence, Mutexes are currently broken. * recover is only partially implemented. Also, the interpreter makes no attempt to distinguish target panics from interpreter crashes. * the sizes of the int, uint and uintptr types in the target program are assumed to be the same as those of the interpreter itself. * all values occupy space, even those of types defined by the spec to have zero size, e.g. struct{}. This can cause asymptotic performance degradation. * os.Exit is implemented using panic, causing deferred functions to run.
Package errors is a flexible error support library for Go. Go's standard library is intentionally sparse on providing error utilities, and developers coming from other programming languages may miss some features they took for granted [1]. This package is an attempt at providing those features in an idiomatic Go way. The main features this package provides (in addition to miscellaneous utilities) are: While Go has very deliberately not implemented class hierarchies, a quick perusal of Go's net and os packages should indicate that sometimes error hierarchies are useful. Go programmers should be familiar with the net.Error interface (and the types that fulfill it) as well as the os helper functions such as os.IsNotExist, os.IsPermission, etc. Unfortunately, to implement something similar, a developer will have to implement a struct that matches the error interface as well as any testing methods or any more detailed interfaces they may choose to export. It's not hard, but it is friction, and developers tend to use fmt.Errorf instead due to ease of use, thus missing out on useful features that functions like os.IsNotExist and friends provide. The errors package provides reusable components for building similar features while reducing friction as much as possible. With the errors package, the os error handling routines can be mimicked as follows: It doesn't take long during Go development before you may find yourself wondering where an error came from. In other languages, as soon as an error is raised, a stack trace is captured and is displayed as part of the language's error handling. Go error types are simply basic values and no such magic happens to tell you what line or what stack an error came from. The errors package fixes this by optionally (but by default) capturing the stack trace as part of your error. This behavior can be turned off and on for specific error classes and comes in two flavors. You can have the stack trace be appended to the error's Error() message, or you can have the stack trace be logged immediately, every time an error of that type is instantiated. Every error and error class supports hierarchical settings, in the sense that if a setting was not explicitly set on that error or error class, setting resolution traverses the error class hierarchy until it finds a valid setting, or returns the default. See CaptureStack()/NoCaptureStack() and LogOnCreation()/NoLogOnCreation() for how to control this feature. These hierarchical settings (for whether or not errors capture or log stack traces) were so useful, we generalized the system to allow users to extend the errors system with their own values. A user can tag a specific error with some value given a statically defined key, or tag a whole error class subtree. Arbitrary error values can easily handle situations like net.Error's Temporary() field, where some errors are temporary and others aren't. This can be mimicked as follows: Another great example of arbitrary error value functionality is the errhttp subpackage. See the errhttp source for more examples of how to use SetData/GetData. The errhttp package really helped clean up our error code. Take a look to see if it can help your error handling with HTTP stacks too. http://godoc.org/github.com/spacemonkeygo/errors/errhttp So you have stack traces, which tell you how the error was generated, but perhaps you're interested in keeping track of how the error was handled? Every time you call errors.Record(err), it adds the current line information to the error's output.
As an example: errors.Record will help you keep track of which error handling branch your code took. There are a few different types of ErrorGroup utilities in this package, but they all work the same way. Make sure to check out the ErrorGroup example. CatchPanic helps you easily manage functions that you think might panic, and instead return errors. CatchPanic works by taking a pointer to your named error return value. Check out the CatchPanic example for more. [1] This errors package started while porting a large Python codebase to Go. https://www.spacemonkey.com/blog/posts/go-space-monkey
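To ground the error-class discussion above, here is a hedged sketch of mimicking an os.IsNotExist-style helper; NewClass, New, and Contains are my reading of the API and should be checked against the package documentation before use.

	package main

	import (
		"fmt"

		"github.com/spacemonkeygo/errors"
	)

	// A small hierarchy: FileError is the parent class, NotFound a child.
	// Hierarchical settings (stack capture, logging) resolve up this tree.
	var (
		FileError = errors.NewClass("File Error")
		NotFound  = FileError.NewClass("Not Found")
	)

	// IsNotFound mimics os.IsNotExist for our own error hierarchy.
	func IsNotFound(err error) bool {
		return NotFound.Contains(err)
	}

	func open(path string) error {
		return NotFound.New("%q does not exist", path)
	}

	func main() {
		err := open("/no/such/file")
		fmt.Println(IsNotFound(err)) // true
	}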
Package klog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: See the documentation for the V function for an explanation of these examples: Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to standard error. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
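The basic examples referred to above are along the following lines; this is a sketch, with the InitFlags wiring shown for completeness and the verbosity levels chosen arbitrarily.

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		klog.InitFlags(nil) // register -v, -vmodule, etc. on the default FlagSet
		flag.Parse()        // must run before any logging
		defer klog.Flush()  // output is buffered; flush before exiting

		klog.Info("Prepare to repel boarders")
		klog.Warningf("unexpected value: %d", 42)

		// V-style logging: only emitted when -v=2 or higher (or a matching
		// -vmodule pattern) is in effect.
		if klog.V(2).Enabled() {
			klog.Info("Starting transaction...")
		}
		klog.V(2).Infof("Processed %d elements", 7)
	}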
Package wmenu creates menus for cli programs. It uses wlog for its interface with the command line. It uses os.Stdin, os.Stdout, and os.Stderr with concurrency by default. wmenu allows you to change the color of the different parts of the menu. This package also creates its own error structure so you can type assert if you need to. wmenu will validate all responses before calling any function. It will also figure out which function should be called so you don't have to.
Package conf provides tools for easily loading program configurations from multiple sources such as the command line arguments, environment, or a configuration file. Most applications only need to use the Load function to get their settings loaded into an object. By default, Load will read from a configurable file defined by the -config-file command line argument, load values present in the environment, and finally load the program arguments. The object in which the configuration is loaded must be a struct, the names and types of its fields are introspected by the Load function to understand how to load the configuration. The name deduction from the struct field obeys the same rules as those implemented by the standard encoding/json package, which means the program can set the "conf" tag to override the default field names in the command line arguments and configuration file. A "help" tag may also be set on the fields of the configuration object to add documentation to the setting, which will be shown when the program is asked to print its help. When values are loaded from the environment the Load function looks for variables matching the struct fields names in snake-upper-case form.
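A hedged sketch of the typical Load usage described above; the import path, the exact "conf"/"help" tag semantics, and whether Load returns leftover arguments are assumptions about the installed version.

	package main

	import (
		"fmt"

		"github.com/segmentio/conf" // assumed import path
	)

	type config struct {
		Addr    string `conf:"addr"    help:"Address to listen on."`
		Verbose bool   `conf:"verbose" help:"Enable verbose logging."`
	}

	func main() {
		cfg := config{Addr: ":8080"} // preset values act as defaults
		// Load fills cfg from the -config-file file, then the environment
		// (ADDR, VERBOSE), then the command line arguments.
		conf.Load(&cfg)
		fmt.Printf("%+v\n", cfg)
	}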
Package sprout provides types and utilities for implementing client and server programs that speak the Sprout Protocol. The Sprout Protocol is specified here: https://man.sr.ht/~whereswaldon/arborchat/specifications/sprout.md NOTE: this package requires using a fork of golang.org/x/crypto, and you must therefore include the following in your `go.mod`: This package exports several important types. The Conn type wraps a connection-oriented transport (usually a TCP connection) and provides methods for sending sprout messages and reading sprout messages off of the connection. It has a number of exported fields which are functions that should handle incoming messages. These must be set by the user, and their behavior should conform to the Sprout specification. If using a Conn directly, be sure to invoke the ReadMessage() method properly to ensure that you receive replies. The Worker type wraps a Conn and provides automatic implementations of both the handler functions for each sprout message and the processing loop that will read new messages and dispatch their handlers. You can send messages on a worker by calling Conn methods via struct embedding. It has an exported embedded Conn. The Conn type has both synchronous and asynchronous methods for sending messages. The synchronous ones block until they receive a response or their timeout channel emits a value. Details on how to use these methods follow. Note: The Send* methods The non-Async methods block until they get a response or until their timeout is reached. There are several cases in which they will return an error: - There is a network problem sending the message or receiving the response - There is a problem creating the outbound message or parsing the inbound response - The status message received in response is not sprout.StatusOk. In this case, the error will be of type sprout.Status The recommended way to invoke synchronous Send*() methods is with a time.Ticker as the input channel, like so: Note: The Send*Async methods The Async versions of each send operation provide more granular control over blocking behavior. They return a chan interface{}, but will never send anything other than a sprout.Status or sprout.Response over that channel. It is safe to assume that the value will be one of those two. The Async versions also return a handle for the request called a MessageID. This can be used to cancel the request in the event that it doesn't have a response or the response no longer matters. This can be done manually using the Cancel() method on the Conn type. The synchronous version of each send method handles this for you, but it must be done manually with the async variant. An example of the appropriate use of an async method:
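The following is a heavily hedged sketch of the async pattern just described: the method name SendAnnounceAsync and its signature are placeholders (not necessarily real Conn methods), the Cancel argument form is assumed, and only the channel behavior (carrying sprout.Status or sprout.Response) and the MessageID handle come from the description above.

	func announceAsync(conn *sprout.Conn) error {
		// Placeholder call: an Async send is assumed to return a request
		// handle, a reply channel, and an error; verify against the package.
		id, replies, err := conn.SendAnnounceAsync( /* message fields */ )
		if err != nil {
			return err
		}
		select {
		case v := <-replies:
			// The channel only ever carries these two types.
			switch resp := v.(type) {
			case sprout.Status:
				fmt.Println("status:", resp)
			case sprout.Response:
				fmt.Println("response:", resp)
			}
		case <-time.After(5 * time.Second):
			conn.Cancel(id) // give up; argument form assumed
		}
		return nil
	}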
Package ccgo translates C to Go source code. This v3 package is obsolete. Please use current ccgo/v4: Invocation 2021-12-23: v3.13.0 adds clang support. To compile the resulting Go programs the package modernc.org/libc has to be installed. CCGO_CPP selects which command is used by the C front end to obtain target configuration. Defaults to `cpp`. Ignored when --load-config <path> is used. TARGET_GOARCH selects the GOARCH of the resulting Go code. Defaults to $GOARCH or runtime.GOARCH if $GOARCH is not set. Ignored when --load-config <path> is used. TARGET_GOOS selects the GOOS of the resulting Go code. Defaults to $GOOS or runtime.GOOS if $GOOS is not set. Ignored when --load-config <path> is used. To compile for the host invoke something like To cross compile set TARGET_GOARCH and/or TARGET_GOOS, not GOARCH/GOOS. Cross compile depends on availability of C stdlib headers for the target platform as well on the set of predefined macros for the target platform. For example, to cross compile on a Linux host, targeting windows/amd64, it's necessary to have mingw64 installed in $PATH. Then invoke something like Only files with extension .c, .h or .json are recognized as input files. A .json file is interpreted as a compile database. All other command line arguments following the .json file are interpreted as items that should be found in the database and included in the output file. Each item should be an object file (.o) or static archive (.a) or a command (no extension). Command line options requiring an argument. -Dfoo Equals `#define foo 1`. -Dfoo=bar Equals `#define foo bar`. -Ipath Add path to the list of include files search path. The option is a capital letter I (India), not a lowercase letter l (Lima). -limport-path The package at <import-path> must have been produced without using the -nocapi option, ie. the package must have a proper capi_$GOOS_$GOARCH.go file. The option is a lowercase letter l (Lima), not a capital letter I (India). -Ufoo Equals `#undef foo`. -compiledb name When this option appears anywhere, most preceding options are ignored and all following command line arguments are interpreted as a command with arguments that will be executed to produce the compilation database. For example: This will execute `make -DFOO -w` and attempt to extract the compile and archive commands. Only POSIX operating systems are supported. The supported build system must output information about entering directories that is compatible with GNU make. The only compilers supported are `gcc` and `clang`. The only archiver supported is `ar`. Format specification: https://clang.llvm.org/docs/JSONCompilationDatabase.html Note: This option also produces information about libraries created with `ar cr` and includes it in the json file, which goes beyond the specification. -crt-import-path path Unless disabled by the -nostdlib option, every produced Go file imports the C runtime library. Default is `modernc.org/libc`. -export-defines "" Export C numeric/string defines as Go constants by capitalizing the first letter of the define's name. -export-defines prefix Export C numeric/string defines as Go constants by prefixing the define's name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-enums "" Export C enum constants as Go constants by capitalizing the first letter of the enum constant name. -export-enums prefix Export C enum constants as Go constants by prefixing the enum constant name with `prefix`. Name conflicts are resolved by adding a numeric suffix.
-export-externs "" Export C extern definitions as Go definitions by capitalizing the first letter of the definition name. -export-externs prefix Export C extern definitions as Go definitions by prefixing the definition name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-fields "" Export C struct fields as Go fields by capitalizing the first letter of the field name. -export-fields prefix Export C struct fields as Go fields by prefixing the field name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-structs "" Export tagged C struct/union types as Go types by capitalizing the first letter of the tag name. -export-structs prefix Export tagged C struct/union types as Go types by prefixing the tag name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-typedefs "" Export C typedefs as Go types by capitalizing the first letter of the typedef name. -export-structs prefix Export C typedefs as as Go types by prefixing the typedef name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -static-locals-prefix prefix Prefix C static local declarators names with 'prefix'. -host-config-cmd command This option has the same effect as setting `CCGO_CPP=command`. -host-config-opts comma-separated-list The separated items of the list are added to the invocation of the configuration command. -pkgname name Set the resulting Go package name to 'name'. Defaults to `main`. -script filename Ccgo does not yet have a concept of object files. All C files that are needed for producing the resulting Go file have to be compiled together and "linked" in memory. There are some problems with this approach, one of them is the situation when foo.c has to be compiled using, for example `-Dbar=42` and "linked" with baz.c that needs to be compiled with `-Dbar=314`. Or `bar` must not defined at all for baz.c, etc. A script in a named file is a CSV file. It is opened like this (error handling omitted): The first field of every record in the CSV file is the directory to use. The remaining fields are the arguments of the ccgo command. This way different C files can be translated using different options. The CSV file may look something like: -volatile comma-separated-list The separated items of the list are added to the list of file scope extern variables the will be accessed atomically, like if their C declarator used the 'volatile' type specifier. Currently only C scalar types of size 4 and 8 bytes are supported. Other types/sizes will ignore both the volatile specifier and the -volatile option. -save-config path This option copies every header included during compilation or compile database generation to a file under the path argument. Additionally the host configuration, ie. predefined macros, include search paths, os and architecture is stored in path/config.json. When this option is used, no Go code is generated, meaning no link phase occurs and thus the memory consumption should stay low. Passing an empty string as an argument of -save-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. This option is ignored when -compiledb <path> is used. --load-config path Note that this option must have the double dash prefix to distinguish it from -lfoo, the [traditional] short form of `-l foo`. This option configures the compiler using path/config.json. The include paths are adjusted to be relative to path. 
For example: Assume on machine A the default C preprocessor reports a system include search path "/usr/include". Running ccgo on A with -save-config /tmp/foo to compile foo.c that #includes <stdlib.h>, which is found in /usr/include/stdlib.h on the host results in Assume /tmp/foo from machine A will be recursively copied to machine B, that may run a different operating system and/or architecture. Let the copy be for example in /tmp/bar. Using --load-config /tmp/bar will instruct ccgo to configure its preprocessor with a system include path /tmp/bar/usr/include and thus use the original machine A stdlib.h found there. When the --load-config is used, no host configuration from a machine B cross C preprocessor/compiler is needed to transpile the foo.c source on machine B as if the compiler would be running on machine A. The particular usefulness of this mechanism is for transpiling big projects for 32 bit architectures. There the lack of an object format in ccgo, and thus the need to link everything in RAM, can require too much memory for the system to handle. The way around this is possibly to run something like on machine A, transfer path/* to machine B and run the link phase there with eg. Note that the C sources for the project must be in the same path on both machines because the compile database stores absolute paths. It might be convenient to put the sources in path/src, the config in path/config, for example, and transfer the [archive of] path/ to the same directory on the second machine. That also solves the issue when ./configure generates files and the result differs per operating system or architecture. Passing an empty string as an argument of -load-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. These command line options don't take arguments. -E When this option is present the compiler does not produce any Go files and instead prints the preprocessor output to stdout. -all-errors Normally only the first 10 or so errors are shown. With this option the compiler will show all errors. -header Using this option suppresses the production of any function definitions. This is possibly useful for producing Go files from C header files. Including function signatures with -header. -func-sig Add this option to include function signatures when compiling headers (using -header). -nostdinc This option disables the default C include search paths. -nostdlib This option disables importing of the runtime library by the resulting Go code. -trace-pinning This option will print the positions and names of local declarators that are being pinned. -version Ignore all other options, print version and exit. -verbose-compiledb Enable verbose output when -compiledb is present. -ignore-undefined This option tells the linker to not insist on finding definitions for declarators that are not implicitly declared and used - but not defined. This might be useful when the intent is to define the missing function in Go functions manually. Name conflict resolution for such declarator names may or may not be applied. -ignore-unsupported-alignment This option tells the compiler to not complain about alignments that Go cannot support. -trace-included-files This option outputs the path names of all included files. This option is ignored when -compiledb <path> is used. There may exist other options not listed above. Those should be considered temporary and/or unsupported and may be removed without notice.
Alternatively, they may eventually get promoted to "documented" options.
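Returning to the -script option described above, the named CSV file could be opened and walked with the standard encoding/csv package roughly as follows; the reader settings and record layout shown here are a sketch based on the description above, not necessarily what ccgo does internally.

	package main

	import (
		"encoding/csv"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("script.csv")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		r := csv.NewReader(f)
		r.FieldsPerRecord = -1 // records may have differing numbers of fields
		r.TrimLeadingSpace = true

		records, err := r.ReadAll()
		if err != nil {
			panic(err)
		}
		for _, rec := range records {
			// rec[0] is the working directory, the rest are ccgo arguments.
			fmt.Println("dir:", rec[0], "args:", rec[1:])
		}
	}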
Package openvg is a wrapper to a C library of high-level 2D graphics operations built on OpenVG 1.1 The typical "hello world" program looks like this: The Init function provides the necessary graphics subsystem initialization and the dimensions of the whole canvas. The Init() call must be paired with a corresponding Finish() call, which performs an orderly shutdown. Typically a "drawing" begins with the Start() call, and ends with End(). A program can have an arbitrary set of Start()/End() pairs. The coordinate system uses float64 coordinates, with the origin at the lower left, with x increasing to the right, and y increasing upwards. Currently, the library provides no mouse or keyboard events, other than those provided by the base operating system. It is typical to pause for user input between drawings by reading standard input. The library's functionality includes shapes, attributes, transformations, text, images, and convenience functions. Shape functions include Polygon, Polyline, Cbezier, Qbezier, Rect, Roundrect, Line, Ellipse, Circle, and Arc. Transformation functions are: Translate, Rotate, Shear, and Scale. For displaying and measuring text: Text, TextMid, TextEnd, and TextWidth. The attribute functions are StrokeColor, StrokeRGB, StrokeWidth, and FillRGB, FillColor, FillLinearGradient, and FillRadialGradient. Colors are specified with RGB triples (0-255) with alpha values (0.0-1.0), or named colors as specified by the SVG standard. Convenience functions are used to set the Background color, start the drawing with a background color, and save the raster to a file. The input terminal may be set/restored to/from raw and cooked mode. Package openvg is a high-level 2D vector graphics library built on OpenVG
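The "hello world" referred to above is roughly the following sketch; the import path and the exact TextMid signature are assumptions and may differ from the installed version.

	package main

	import (
		"bufio"
		"os"

		"github.com/ajstarks/openvg" // assumed import path
	)

	func main() {
		width, height := openvg.Init() // initialize the graphics subsystem
		defer openvg.Finish()          // orderly shutdown

		openvg.Start(width, height)     // begin a drawing
		openvg.BackgroundColor("black") // fill the whole canvas
		openvg.FillColor("white")
		openvg.Circle(float64(width/2), float64(height/2), 200)
		openvg.FillColor("black")
		openvg.TextMid(float64(width/2), float64(height/2), "hello, world", "serif", 48)
		openvg.End() // end the drawing and display it

		// Pause for user input before shutting down, as suggested above.
		bufio.NewReader(os.Stdin).ReadBytes('\n')
	}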
Package littleboss creates self-supervising Go binaries. A self-supervising binary starts itself as a child process. The parent process becomes a supervisor that is responsible for monitoring, managing I/O, restarting, and reloading the child. Make a program use littleboss by modifying the main function: By default the supervisor is bypassed and the program executes directly. A flag, -littleboss, is added to the binary. It can be used to start a supervised binary and manage it: Supervisor options are baked into the binary. The Littleboss struct type contains fields that can be set before calling the Run method to configure the supervisor.
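The modified main function described above looks roughly like the following sketch; the import path is assumed and the HTTP server is just a stand-in for the program's real work.

	package main

	import (
		"context"
		"log"
		"net/http"

		"crawshaw.io/littleboss" // assumed import path
	)

	func main() {
		lb := littleboss.New("myservice")
		lb.Run(func(ctx context.Context) {
			// The original program body goes here; shut down when ctx is
			// cancelled so the supervisor can restart or reload the child.
			srv := &http.Server{Addr: ":8080"}
			go func() {
				<-ctx.Done()
				srv.Shutdown(context.Background())
			}()
			log.Println(srv.ListenAndServe())
		})
	}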
Package bender makes it easy to build load testing applications for services using protocols like HTTP, Thrift, Protocol Buffers and many others. Bender provides two different approaches to load testing. The first, LoadTestThroughput, gives the tester control over the throughput (QPS), but not over the concurrency (number of goroutines). The second, LoadTestConcurrency, gives the tester control over the concurrency, but not over the throughput. LoadTestThroughput simulates the load caused by concurrent clients sending requests to a service. It can be used to simulate a target throughput (QPS) and to measure the request latency and error rate at that throughput. The load tester will keep spawning goroutines to send requests, even if the service is sending errors or hanging, making this a good way to test the actual behavior of the service under heavy load. This is the same approach used by Twitter's Iago library, and is nearly always the right place to start when load testing services exposed (directly or indirectly) to the Internet. LoadTestConcurrency simulates a fixed number of clients, each of which sends a request, waits for a response and then repeats. The downside to this approach is that increased latency from the service results in decreased throughput from the load tester, as the simulated clients are all waiting for responses. That makes this a poor way to test services, as real-world traffic doesn't behave this way. The best use for this function is to test services that need to handle a lot of concurrent connections, and for which you need to simulate many connections to test resource limits, latency and other metrics. This approach is used by load testers like the Grinder and JMeter, and has been critiqued well by Gil Tene in his talk "How Not To Measure Latency". The next two sections provide more detail on the implementations of LoadTestThroughput and LoadTestConcurrency. The following sections provide descriptions for the common arguments to the load testing functions, and how they work, including the interval generators, request generators, request executors and event recorders. The LoadTestThroughput function takes four arguments. The first is a function that generates nanosecond intervals which are used as request arrival times. The second is a channel of requests. The third is a function that knows how to send a request and validate the response. The inner loop of LoadTestThroughput looks like this: The fourth argument to LoadTestThroughput is a channel which is used to output events. There are events for the start and end of the load test, the sending of each request and the receiving of each response and the wait time between sending requests. The wait message includes an "overage" time which is useful for monitoring the health of the load test program and underlying OS and host. The overage time measures the difference between the expected wait time (the interval time) and the actual wait time. On a heavily loaded host, or when there are long GC pauses, that difference can be large. Bender attempts to compensate for the overage by reducing the subsequent wait times, but under heavy load, the overage will continue to increase until it cannot be compensated for. At that point the wait events will report a monotonically increasing overage which means the load test isn't keeping up with the desired throughput. A load test ends when the request channel is closed and all remaining requests in the channel have been executed. 
The LoadTestConcurrency function takes four arguments. The first is a semaphore that controls the maximum number of concurrently executing requests, and makes it possible to dynamically control that number over the lifetime of the load test. The second, third and fourth arguments are identical to those for LoadTestThroughput. The inner loop of LoadTestConcurrency does something like this: Reducing the semaphore count will reduce the number of running connections as existing connections complete, so there can be some lag between calling workerSem.Wait(n) and the number of running connections actually decreasing by n. The worker semaphore does not protect you from reducing the number of workers below zero, which will cause undefined behavior from the load tester. As with LoadTestThroughput, the load test ends when the request channel is closed and all remaining requests have been executed. An IntervalGenerator is a function that takes the current Unix epoch time (in nanoseconds) and returns a non-negative time (also in nanoseconds) until the next request should be sent. Bender provides functions to create interval generators for uniform and exponential distributions, each of which takes the target throughput (requests per second) and returns an IntervalGenerator. Neither of the included generators makes use of the function argument, but it is there for cases in which the simulated intervals are time dependent (you want to simulate the daily traffic variation of a web site, for example). The request channel decouples creation of requests from execution of requests and allows them to run concurrently. A typical approach to creating a request channel is code like this: Requests can be generated randomly, read from files (like access logs) or generated any other way you like. The important part is that the request generation be done in a separate goroutine that communicates with the load tester via a channel. In addition, the channel must be closed to indicate that the load test is done. The requests channel should almost certainly be buffered, unless you can generate requests much faster than they are sent (and not just on average). The easiest way to miss your target throughput with LoadTestThroughput is to be blocked waiting for requests to be generated, particularly when testing a large throughput. A request executor is a function that takes the current Unix Epoch time (in nanoseconds) and a *Request, sends the request to the service, waits for the response, optionally validates it and returns an error or nil. This function is timed by the load tester, so it should do as little else as possible, and everything it does will be added to the reported service latency. Here, for example, is a very simple request executor for HTTP requests: The http package in Bender provides a function that generates executors that make use of the http package's Transport and Client types and provide an easy way to validate the body of the http request. RequestExecutors are called concurrently from multiple goroutines, and must be concurrency-safe. The LoadTestThroughput and LoadTestConcurrency functions both take a channel of events (represented as interface{}) as a parameter. This channel is used to output events as they happen during the load test, including the following events: StartEvent: sent once at the start of the load test. EndEvent: sent once at the end of the load test, no more events are sent after this. WaitEvent: sent only for LoadTestThroughput, see below for details.
StartRequestEvent: sent before a request is sent to the service, includes the request and the event time. Note that the event time is not the same as the start time for the request for stupid performance reasons. If you need to know the actual start time, see the EndRequestEvent. EndRequestEvent: sent after a request has finished, includes the response, the actual start and end times for the request and any error returned by the RequestExecutor. The WaitEvent includes the time until the next request is sent (in nanoseconds) and an "overage" time. When the inner loop sleeps, it subtracts the total time slept from the time it intended to sleep, and adds that to the overage. The overage, therefore, is a good proxy for how overloaded the load testing host is. If it grows over time, that means the load test is falling behind, and can't start enough goroutines to run all the requests it needs to. In that case you will need a more powerful load testing host, or need to distribute the load test across more hosts. The event channel doesn't need to be buffered, but it may help if you find that Bender isn't sending as much throughput as you expect. In general, this depends a lot on how quickly you are consuming events from the channel, and how quickly the load tester is running. It is a good practice to proactively buffer this channel.
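Returning to the request channel discussed above, a typical generator is a separate goroutine that feeds a buffered channel and closes it when done; the sketch below uses a hypothetical newHTTPRequest helper and an interface{} element type, both of which should be adapted to the concrete request type of the protocol package you use.

	// buildRequests returns a buffered channel fed by a separate goroutine and
	// closed when all requests have been generated, which is what signals the
	// end of the load test.
	func buildRequests(n int) chan interface{} {
		requests := make(chan interface{}, 100) // buffered, per the advice above
		go func() {
			defer close(requests)
			for i := 0; i < n; i++ {
				requests <- newHTTPRequest(i) // hypothetical request constructor
			}
		}()
		return requests
	}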
Package wmenu creates menus for cli programs. It uses wlog for it's interface with the command line. It uses os.Stdin, os.Stdout, and os.Stderr with concurrency by default. wmenu allows you to change the color of the different parts of the menu. This package also creates it's own error structure so you can type assert if you need to. wmenu will validate all responses before calling any function. It will also figure out which function should be called so you don't have to.
Package routes provides a simple http routing API for the Go programming language, compatible with the standard http.ListenAndServe function. Create a new route multiplexer: Define a simple route with a given method (ie Get, Put, Post ...), path and http.HandleFunc. Define a route with restful parameters in the path: The parameters are parsed from the URL, and appended to the Request URL's query parameters. More control over the route's parameter matching is possible by providing a custom regular expression: To start the web server, use the standard http.ListenAndServe function, and provide the route multiplexer:
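A hedged sketch of the flow just described, from creating the multiplexer to serving it; the import path, the Get method, and the ":name" query-parameter convention are my reading of the package and should be verified.

	package main

	import (
		"fmt"
		"net/http"

		"github.com/drone/routes" // assumed import path
	)

	func hello(w http.ResponseWriter, r *http.Request) {
		// Restful parameters are appended to the query string (assumed ":" prefix).
		fmt.Fprintf(w, "hello, %s\n", r.URL.Query().Get(":name"))
	}

	func main() {
		mux := routes.New()
		mux.Get("/hello/:name", hello)
		http.Handle("/", mux)
		http.ListenAndServe(":8088", nil)
	}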
Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: See the documentation for the V function for an explanation of these examples: Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to files in a temporary directory. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
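The basic examples and the V examples referred to above typically look like the following sketch; the verbosity levels are arbitrary.

	package main

	import (
		"flag"

		"github.com/golang/glog"
	)

	func main() {
		flag.Parse()       // parse -v, -vmodule, -log_dir, ... before logging
		defer glog.Flush() // output is buffered; flush before exiting

		glog.Info("Prepare to repel boarders")
		glog.Errorf("harmless error: %v", "example")

		// Both forms emit only when -v=2 (or a matching -vmodule) is in effect.
		if glog.V(2) {
			glog.Info("Starting transaction...")
		}
		glog.V(2).Infoln("Processed", 7, "elements")
	}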
Package tao implements a light-weight TCP network programming framework. Server represents a TCP server with various ServerOption supported. 1. Provides custom codec by CustomCodecOption; 2. Provides TLS server by TLSCredsOption; 3. Provides callback on connected by OnConnectOption; 4. Provides callback on message arrived by OnMessageOption; 5. Provides callback on closed by OnCloseOption; 6. Provides callback on error occurred by OnErrorOption; ServerConn represents a connection on the server side. ClientConn represents a connection connected to other servers. You can make it reconnectable by passing ReconnectOption when creating. AtomicInt64, AtomicInt32 and AtomicBoolean provide concurrent-safe atomic types in a Java-like style, while ConnMap is a go-routine safe map for connection management. Every handler function is defined as func(context.Context, WriteCloser). Usually a message and a net ID are carried within the Context, and developers can retrieve them by calling the following functions. Programmers are free to define their own request-scoped data and put them in the context, but must be sure that the data is safe for multiple go-routines to access. Every message must be defined according to the interface and provide a deserialization function: There is a TypeLengthValueCodec defined, but one can also define his/her own codec: TimingWheel is a safe timer for running timed callbacks on connection. WorkerPool is a go-routine pool for running message handlers, you can fetch one by calling func WorkerPoolInstance() *WorkerPool.
Package amboy provides basic infrastructure for running and describing jobs and job workflows with, potentially, minimal overhead and additional complexity. Amboy works with 4 basic logical objects: jobs representing work; runners, which are responsible for executing jobs; queues, that represent pipelines and offline workflows of jobs (i.e. not real time, processes that run outside of the primary execution path of a program); and dependencies that represent relationships between jobs. The inspiration for amboy was to be able to provide a unified way to define and run jobs, that would feel equally "native" for distributed applications and distributed web applications, and move easily between different architectures. While amboy users will generally implement their own Job and dependency implementations, Amboy itself provides several example Queue implementations, as well as several generic examples and prototypes of Job and dependency.Manager objects. Generally speaking you should be able to use included amboy components to provide the queue and runner components, in conjunction with custom and generic job and dependency variations. Consider the following example: The amboy package provides a number of generic methods that, using the Queue.Stats() method, block until all jobs are complete. They provide different semantics, which may be useful in different circumstances. All of the Wait* functions wait until the total number of jobs submitted to the queue is equal to the number of completed jobs, and as a result these methods don't prevent other threads from adding jobs to the queue after beginning to wait. As a special case, retryable queues will also wait until there are no retrying jobs remaining. Additionally, there are a set of methods, WaitJob*, that allow callers to wait for a specific job to complete.
Package mnemonics is a package that converts []byte's into human-friendly phrases, using common words pulled from a dictionary. The dictionary size is 1626, and multiple languages are supported. Each dictionary supports modified phrases. Only the first few characters of each word are important. These characters form a unique prefix. For example, in the English dictionary, the unique prefix len (EnglishUniquePrefixLen) is 3, which means the word 'abbey' could be replaced with the word 'abbot', and the program would still run as expected. The primary purpose of this library is creating human-friendly cryptographically secure passwords. A cryptographically secure password needs to contain between 128 and 256 bits of entropy. Humans are typically incapable of generating sufficiently secure passwords without a random number generator, and 256-bit random numbers tend to be difficult to memorize and even to write down (a single mistake in the writing, or even a single somewhat sloppy character can render the backup useless). By using a small set of common words instead of random numbers, copying errors are more easily spotted and memorization is also easier, without sacrificing password strength. The mnemonics package does not have any functions for actually generating entropy, it just converts existing entropy into human-friendly phrases.
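A heavily hedged sketch of the round trip from entropy to phrase and back; the ToPhrase and FromPhrase names, the English dictionary constant, and the import path are my reading of the package and should be verified before use.

	package main

	import (
		"crypto/rand"
		"fmt"

		mnemonics "github.com/NebulousLabs/entropy-mnemonics" // assumed import path
	)

	func main() {
		// The docs above recommend 128-256 bits of entropy; use 16 bytes here.
		entropy := make([]byte, 16)
		if _, err := rand.Read(entropy); err != nil {
			panic(err)
		}

		phrase, err := mnemonics.ToPhrase(entropy, mnemonics.English) // assumed names
		if err != nil {
			panic(err)
		}
		fmt.Println(phrase)

		back, err := mnemonics.FromPhrase(phrase, mnemonics.English)
		if err != nil {
			panic(err)
		}
		fmt.Printf("round-tripped %d bytes\n", len(back))
	}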
Package rand implements pseudo-random number generators unsuitable for security-sensitive work. Top-level functions that do not have a Rand parameter, such as Float64 and Int, use non-deterministic goroutine-local pseudo-random data sources that produce different sequences of values each time a program is run. These top-level functions are safe for concurrent use by multiple goroutines, and their performance does not degrade when the parallelism increases. Rand methods and functions with a Rand parameter are not safe for concurrent use, but should generally be preferred because of determinism, higher speed and quality. This package is considerably faster and generates higher quality random numbers than the math/rand package. However, this package's outputs might be predictable regardless of how it's seeded. For random numbers suitable for security-sensitive work, see the crypto/rand package.
Package configdir provides a cross platform means of detecting the system's configuration directories. This makes it easy to program your Go app to store its configuration files in a standard location relevant to the host operating system. For Linux and some other Unixes (like BSD) this means following the Freedesktop.org XDG Base Directory Specification 0.8, and for Windows and macOS it uses their standard directories. This is a simple no-nonsense module that just gives you the paths, with optional components tacked on the end for vendor- or app-specific namespacing. It also provides a convenience function to call `os.MkdirAll()` on the paths to ensure they exist and are ready for you to read and write files in. Standard Global Configuration Paths Standard User-Specific Configuration Paths Standard User-Specific Cache Paths Quick start example for a common use case of this module.
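A hedged sketch of the quick-start use case mentioned above; the LocalConfig and MakePath function names and the import path are assumptions about the installed version.

	package main

	import (
		"fmt"
		"path/filepath"

		"github.com/kirsle/configdir" // assumed import path
	)

	func main() {
		// Per-user configuration directory, namespaced for this app.
		configPath := configdir.LocalConfig("my-app")
		if err := configdir.MakePath(configPath); err != nil { // ensure it exists
			panic(err)
		}
		settings := filepath.Join(configPath, "settings.json")
		fmt.Println("writing settings to", settings)
	}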
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/ Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and detailed descriptions of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference of the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/ The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK. aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier. awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare returned error to specific error codes. See the package's documentation for additional values that can be extracted such as RequestId. credentials - Provides the types and built in credentials providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credentials providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles. endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to lookup AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2". session - Provides initial default configuration, and load configuration from external sources such as environment and shared credentials file. request - Provides the API request sending, and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request. service - Clients for AWS services. All services supported by the SDK are available under this folder. The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports. All service clients follow a common pattern of creation and usage.
When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides shared configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override configuration provided in the Session, creating service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently. In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following: Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles. Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development. Credentials can be configured in code as well by setting the Config's Credentials value to a custom provider or using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. This example creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests to. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created. The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. 
These values are helpful, because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct parameter Region to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests. ListObjects - Base API operation that will make the API request to the service. ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly. ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response. ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument for modifying how the request will be made, or extracting information from the raw HTTP response. ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext this method also takes the request package's Option function option types as the variadic argument. In addition to the API operations the SDK also includes several higher level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters. WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Will return successfully when that state is accomplished. WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern. In addition these methods take the request package's WaiterOptions to configure the waiter, and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round trip API page requests. 
Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If a resource times out the error code returned will be request.WaiterResourceNotReadyErrorCode. This example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
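A condensed sketch of such a file, assuming the v1 SDK packages named above (session, aws, s3manager); the region, bucket, key, and file names are placeholders:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        // Shared configuration: the region set here is used by all clients
        // created from this Session unless overridden per client.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))

        f, err := os.Open("upload.txt")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Cancel the request if it takes longer than 30 seconds.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        uploader := s3manager.NewUploader(sess)
        _, err = uploader.UploadWithContext(ctx, &s3manager.UploadInput{
            Bucket: aws.String("my-bucket"),
            Key:    aws.String("my-key"),
            Body:   f,
        })
        if err != nil {
            // A deadline-exceeded error surfaces here if the upload timed out.
            log.Fatal(err)
        }
    }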
Package stomp provides operations that allow communication with a message broker that supports the STOMP protocol. STOMP is the Streaming Text-Oriented Messaging Protocol. See http://stomp.github.com/ for more details. This package provides support for all STOMP protocol features in the STOMP protocol specifications, versions 1.0, 1.1 and 1.2. These features include protocol negotiation, heart-beating, value encoding, and graceful shutdown. Connecting to a STOMP server is achieved using the stomp.Dial function, or the stomp.Connect function. See the examples section for a summary of how to use these functions. Both functions return a stomp.Conn object for subsequent interaction with the STOMP server. Once a connection (stomp.Conn) is created, it can be used to send messages to the STOMP server, or create subscriptions for receiving messages from the STOMP server. Transactions can be created to send multiple messages and/or acknowledge multiple received messages from the server in one, atomic transaction. The examples section has examples of using subscriptions and transactions. The client program can instruct the stomp.Conn to gracefully disconnect from the STOMP server using the Disconnect method. This will perform a graceful shutdown sequence as specified in the STOMP specification. This package also exports types that represent a STOMP frame, and operate on STOMP frames. These types include stomp.Frame, stomp.Reader, stomp.Writer and stomp.Validator. While a program can use this package to communicate with a STOMP server without using these types directly, they are useful for implementing a STOMP server in Go. The server subpackage provides a simple implementation of a STOMP server that is useful for testing and could be useful for applications that require a simple, STOMP-compliant message broker. Source code and other details for the project are available at GitHub:
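A brief sketch of connecting, sending, and subscribing with this package; the broker address, queue name, and import path are assumptions:

    package main

    import (
        "fmt"
        "log"

        "github.com/go-stomp/stomp" // assumed import path
    )

    func main() {
        conn, err := stomp.Dial("tcp", "localhost:61613")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Disconnect()

        // Send a message to a queue.
        if err := conn.Send("/queue/test-1", "text/plain", []byte("hello")); err != nil {
            log.Fatal(err)
        }

        // Subscribe to the same queue and receive one message.
        sub, err := conn.Subscribe("/queue/test-1", stomp.AckAuto)
        if err != nil {
            log.Fatal(err)
        }
        msg := <-sub.C
        fmt.Println(string(msg.Body))
    }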
Package siris is a fully-featured HTTP/2 backend web framework written entirely in Google’s Go Language. Source code and other details for the project are available at GitHub: The only requirement is the Go Programming Language, at least version 1.8. Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways of gaining access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `siris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/go-siris/siris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use the `siris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`siris#Addr, siris#Listener, siris#Server, siris#TLS, siris#AutoTLS`) accept a variadic argument of `host.Configurator`; they are just `func(*host.Supervisor)`. Therefore the `Application` gives you the right to modify the auto-created host supervisor through these. Example Code: All HTTP methods are supported, and developers can also register handlers for the same paths with different methods. The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more context.Handler executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, Siris provides functions for all HTTP Methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more context.Handler executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: Siris developers are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: With the help of Siris's expressionist router you can build any form of API you desire, with safety. Example code: In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom http errors; now it's time to see wildcard parameters and macros. Siris, like the net/http std package, registers a route's handlers by a Handler; the Siris type of handler is just a func(ctx context.Context) where context comes from github.com/go-siris/siris/context. Until go 1.9 you will have to import that package too; after go 1.9 this will not be necessary. Siris has the easiest and the most powerful routing process you have ever met. At the same time, Siris has its own interpreter (yes, like a programming language) for the route path syntax and its dynamic path parameters parsing and evaluation; I am calling them "macros" for short. How? 
It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that Siris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: in the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here, the only thing that runs at serve time is the returned func(paramValue string) bool. Example code: A path parameter name should contain only alphabetical letters; symbols, including '_', and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warnings if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), and the context's local storage that can be used to communicate between handlers and middleware(s) goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/go-siris/siris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: Siris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Siris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context.ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set to each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/go-siris/siris/tree/master/_examples/intermediate/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode as they don't need to restart their app on every template edit. Example code: Each one of these template engines has different options located here: https://github.com/go-siris/siris/tree/master/view . This example will show how to store and access data from a session. You don’t need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally they can visit /logout to revoke their access to our secret message. Example code: Running the example: But you should have a basic idea of the framework by now; we just scratched the surface. 
If you enjoy what you just saw and want to learn more, please follow the links below: Examples: Built-in Middleware: Community Middleware: Home Page:
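A minimal hello-world sketch of the routing API described above; the handler signature and the siris.Addr runner come from the text, while ctx.Writef is assumed from the framework's context package:

    package main

    import (
        "github.com/go-siris/siris"
        "github.com/go-siris/siris/context"
    )

    func main() {
        app := siris.New()

        // Register a handler for GET /.
        app.Get("/", func(ctx context.Context) {
            ctx.Writef("Hello from Siris")
        })

        // Start the server on port 8080.
        app.Run(siris.Addr(":8080"))
    }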
Elvish is a cross-platform shell, supporting Linux, BSDs and Windows. It features an expressive programming language, with features like namespacing and anonymous functions, and a fully programmable user interface with friendly defaults. It is suitable for both interactive use and scripting.
Package goaction enables writing Github Actions in Go. The idea is: write a standard Go script, one that works with `go run`, and use it as a Github action. The script's inputs - flags and environment variables - are set through the Github action API. This project will generate all the required files for the script (this generation can be done automatically with Github action integration). The library also exposes a neat API to get workflow information. - [x] Write a Go script. - [x] Add `goaction` configuration in `.github/workflows/goaction.yml`. - [x] Push the project to Github. See the simplest example of a Goaction script: (posener/goaction-example) https://github.com/posener/goaction-example, or an example that demonstrates using Github APIs: (posener/goaction-issues-example) https://github.com/posener/goaction-issues-example. Write a Github Action by writing Go code! Just start a Go module with a main package, and execute it as a Github action using Goaction, or from the command line using `go run`. A Go executable can get inputs from command line flags and from environment variables. Github actions should have an `action.yml` file that defines this API. Goaction bridges the gap by parsing the Go code and creating this file automatically for you. The main package inputs should be defined with the standard `flag` package for command line arguments, or by `os.Getenv` for environment variables. These inputs define the API of the program and `goaction` automatically detects them and creates the `action.yml` file from them. Additionally, goaction also provides a library that exposes all the Github action environment in an easy-to-use API. See the documentation for more information. Code segments which should run only in a Github action (called "CI mode"), and not when the main package runs as a command line tool, should be protected by an `if goaction.CI { ... }` block. In order to convert the repository to a Github action, the goaction command line should run on the **"main file"** (described above). This command can run manually (by ./cmd/goaction) but luckily `goaction` also comes as a Github action :-) The Goaction Github action keeps the Github action file updated according to the main Go file automatically. When a PR is made, goaction will post a review explaining what changes to expect. When a new commit is pushed, Goaction makes sure that the Github action files are updated if needed. Add the following content to `.github/workflows/goaction.yml` ./action.yml: A "metadata" file for Github actions. If this file exists, the repository is considered a Github action, and the file contains information that instructs how to invoke this action. See (metadata syntax) https://help.github.com/en/actions/building-actions/metadata-syntax-for-github-actions. for more info. ./Dockerfile: A file that contains instructions on how to build a container that is used for Github actions. Github action uses this file in order to create a container image for the action. The container can also be built and tested manually: Goaction parses the Go script file and looks for annotations that extend the information that exists in the function calls. Goaction annotations are comments that start with `//goaction:` (no space after slashes). They can only be set on a `var` definition. The following annotations are available: * `//goaction:required` - sets an input definition to be "required". * `//goaction:skip` - skips an input or output definition. * `//goaction:description <description>` - add description for `os.Getenv`. 
* `//goaction:default <value>` - add default value for `os.Getenv`. A list of projects which are using Goaction (please send a PR if your project uses goaction and does not appear here). * (posener/goreadme) http://github.com/posener/goreadme
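A minimal sketch of a goaction-style main package: inputs are declared with the flag package and os.Getenv, CI-only behavior is guarded by goaction.CI, and the annotation comments follow the syntax listed above (the flag names and environment variable are illustrative):

    package main

    import (
        "flag"
        "fmt"
        "os"

        "github.com/posener/goaction"
    )

    //goaction:required
    var name = flag.String("name", "", "Name to greet.")

    //goaction:description Token used to talk to the Github API.
    var token = os.Getenv("GITHUB_TOKEN")

    func main() {
        flag.Parse()
        fmt.Println("Hello,", *name)

        if goaction.CI {
            // Code here runs only when executed as a Github action.
            _ = token
        }
    }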
Package testscript provides support for defining filesystem-based tests by creating scripts in a directory. To invoke the tests, call testscript.Run. For example: A testscript directory holds test scripts with extension txtar or txt run during 'go test'. Each script defines a subtest; the exact set of allowable commands in a script is defined by the parameters passed to the Run function. To run a specific script foo.txtar or foo.txt, run where TestName is the name of the test that Run is called from. To define an executable command (or several) that can be run as part of the script, call RunMain with the functions that implement the command's functionality. The command functions will be called in a separate process, so are free to mutate global variables without polluting the top level test binary. In general script files should have short names: a few words, not whole sentences. The first word should be the general category of behavior being tested, often the name of a subcommand to be tested or a concept (vendor, pattern). Each script is a text archive (go doc golang.org/x/tools/txtar). The script begins with an actual command script to run followed by the content of zero or more supporting files to create in the script's temporary file system before it starts executing. As an example: Each script runs in a fresh temporary work directory tree, available to scripts as $WORK. Scripts also have access to these other environment variables: The environment variable $exe (lowercase) is an empty string on most systems, ".exe" on Windows. The script's supporting files are unpacked relative to $WORK and then the script begins execution in that directory as well. Thus the example above runs in $WORK with $WORK/hello.txtar containing the listed contents. The lines at the top of the script are a sequence of commands to be executed by a small script engine in the testscript package (not the system shell). The script stops and the overall test fails if any particular command fails. Each line is parsed into a sequence of space-separated command words, with environment variable expansion and # marking an end-of-line comment. Adding single quotes around text keeps spaces in that text from being treated as word separators and also disables environment variable expansion. Inside a single-quoted block of text, a repeated single quote indicates a literal single quote, as in: A line beginning with # is a comment and conventionally explains what is being done or tested at the start of a new phase in the script. A special form of environment variable syntax can be used to quote regexp metacharacters inside environment variables. The "@R" suffix is special, and indicates that the variable should be quoted. The command prefix ! indicates that the command on the rest of the line (typically go or a matching predicate) must fail, not succeed. Only certain commands support this prefix. They are indicated below by [!] in the synopsis. The command prefix [cond] indicates that the command on the rest of the line should only run when the condition is satisfied. The predefined conditions are: Any known values of GOOS and GOARCH can also be used as conditions. They will be satisfied if the target OS or architecture match the specified value. For example, the condition [darwin] is true if GOOS=darwin, and [amd64] is true if GOARCH=amd64. A condition can be negated: [!short] means to run the rest of the line when testing.Short() is false. Additional conditions can be added by passing a function to Params.Condition. 
The predefined commands are: cd dir Change to the given directory for future commands. chmod perm path... Change the permissions of the files or directories named by the path arguments to the given octal mode (000 to 777). [!] cmp file1 file2 Check that the named files have (or do not have) the same content. By convention, file1 is the actual data and file2 the expected data. File1 can be "stdout" or "stderr" to use the standard output or standard error from the most recent exec or wait command. (If the files have differing content and the command is not negated, the failure prints a diff.) [!] cmpenv file1 file2 Like cmp, but environment variables in file2 are substituted before the comparison. For example, $GOOS is replaced by the target GOOS. cp src... dst Copy the listed files to the target file or existing directory. src can include "stdout" or "stderr" to use the standard output or standard error from the most recent exec or go command. env [key=value...] With no arguments, print the environment (useful for debugging). Otherwise add the listed key=value pairs to the environment. [!] exec program [args...] [&] Run the given executable program with the arguments. It must (or must not) succeed. Note that 'exec' does not terminate the script (unlike in Unix shells). If the last token is '&', the program executes in the background. The standard output and standard error of the previous command are cleared, but the output of the background process is buffered — and checking of its exit status is delayed — until the next call to 'wait', 'skip', or 'stop' or the end of the test. At the end of the test, any remaining background processes are terminated using os.Interrupt (if supported) or os.Kill. If the last token is '&word&' (where "word" is alphanumeric), the command runs in the background but has a name, and can be waited for specifically by passing the word to 'wait'. Standard input can be provided using the stdin command; this will be cleared after exec has been called. [!] exists [-readonly] file... Each of the listed files or directories must (or must not) exist. If -readonly is given, the files or directories must be unwritable. [!] grep [-count=N] pattern file The file's content must (or must not) match the regular expression pattern. For positive matches, -count=N specifies an exact number of matches to require. mkdir path... Create the listed directories, if they do not already exist. mv path1 path2 Rename path1 to path2. OS-specific restrictions may apply when path1 and path2 are in different directories. rm file... Remove the listed files or directories. skip [message] Mark the test skipped, including the message if given. [!] stderr [-count=N] pattern Apply the grep command (see above) to the standard error from the most recent exec or wait command. stdin file Set the standard input for the next exec command to the contents of the given file. File can be "stdout" or "stderr" to use the standard output or standard error from the most recent exec or wait command. [!] stdout [-count=N] pattern Apply the grep command (see above) to the standard output from the most recent exec or wait command. stop [message] Stop the test early (marking it as passing), including the message if given. symlink file -> target Create file as a symlink to target. The -> (like in ls -l output) is required. wait [command] Wait for all 'exec' and 'go' commands started in the background (with the '&' token) to exit, and display success or failure status for them. 
After a call to wait, the 'stderr' and 'stdout' commands will apply to the concatenation of the corresponding streams of the background commands, in the order in which those commands were started. If an argument is specified, it waits for just that command. When TestScript runs a script and the script fails, by default TestScript shows the execution of the most recent phase of the script (since the last # comment) and only shows the # comments for earlier phases. For example, here is a multi-phase script with a bug in it (TODO: make this example less go-command specific): The bug is that the final phase installs p11 instead of p1. The test failure looks like: Note that the commands in earlier phases have been hidden, so that the relevant commands are more easily found, and the elapsed time for a completed phase is shown next to the phase heading. To see the entire execution, use "go test -v", which also adds an initial environment dump to the beginning of the log. Note also that in reported output, the actual name of the per-script temporary directory has been consistently replaced with the literal string $WORK. If Params.TestWork is true, it causes each test to log the name of its $WORK directory and other environment variable settings and also to leave that directory behind when it exits, for manual debugging of failing tests:
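For orientation, a minimal sketch of wiring this into a test binary; the directory name, script content, and import path are illustrative:

    package foo_test

    import (
        "testing"

        "github.com/rogpeppe/go-internal/testscript" // assumed import path
    )

    func TestScripts(t *testing.T) {
        // Each *.txtar or *.txt file under testdata becomes a subtest.
        testscript.Run(t, testscript.Params{
            Dir: "testdata",
        })
    }

    // A script such as testdata/hello.txtar might contain:
    //
    //    # verify the supporting file was unpacked into $WORK
    //    exists hello.text
    //    grep 'hello world' hello.text
    //
    //    -- hello.text --
    //    hello world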
bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behavior is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but using the `.rodata` hack. The byte slice returned from this example cannot be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behavior of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`. 
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
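To illustrate the `.rodata` hack behind the NoMemCopy option discussed above: the gist is converting a string constant (stored in read-only data) into a byte slice without copying. This is a generic sketch of the technique, not go-bindata's exact generated code:

    package asset

    import (
        "reflect"
        "unsafe"
    )

    // nocopyBytes returns a []byte that aliases the string's backing data.
    // Writing to the result panics, because the data lives in the read-only
    // .rodata section. The safe default approach is an explicit copy: []byte(s).
    func nocopyBytes(s string) []byte {
        sh := (*reflect.StringHeader)(unsafe.Pointer(&s))
        var b []byte
        bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
        bh.Data = sh.Data
        bh.Len = sh.Len
        bh.Cap = sh.Len
        return b
    }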
Package wasmtime is a WebAssembly runtime for Go powered by Wasmtime. This package provides everything necessary to compile and execute WebAssembly modules as part of a Go program. Wasmtime is a JIT compiler written in Rust, and can be found at https://github.com/bytecodealliance/wasmtime. This package is a binding to the C API provided by Wasmtime. The API of this Go package is intended to mirror the Rust API (https://docs.rs/wasmtime) relatively closely, so if you find something is under-documented here then you may have luck consulting the Rust documentation as well. As always though feel free to file any issues at https://github.com/bytecodealliance/wasmtime-go/issues/new. It's also worth pointing out that the authors of this package up to this point primarily work in Rust, so if you've got suggestions of how to make this package more idiomatic for Go we'd love to hear your thoughts! An example of a wasm module which calculates the GCD of two numbers An example of instantiating a small wasm module which imports functionality from the host, then calling into wasm which calls back into the host.
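A small sketch of compiling and calling a wasm function with this package; the exact constructor signatures have varied between wasmtime-go releases, so treat this as an approximation of a recent API:

    package main

    import (
        "fmt"

        "github.com/bytecodealliance/wasmtime-go"
    )

    func main() {
        check := func(err error) { if err != nil { panic(err) } }

        engine := wasmtime.NewEngine()
        store := wasmtime.NewStore(engine)

        // A trivial module exporting an add function.
        wasm, err := wasmtime.Wat2Wasm(`
            (module
              (func (export "add") (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add))`)
        check(err)

        module, err := wasmtime.NewModule(engine, wasm)
        check(err)
        instance, err := wasmtime.NewInstance(store, module, nil)
        check(err)

        add := instance.GetFunc(store, "add")
        sum, err := add.Call(store, 2, 3)
        check(err)
        fmt.Println(sum) // 5
    }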
Package frauddetector provides the API client, operations, and parameter types for Amazon Fraud Detector. This is the Amazon Fraud Detector API Reference. This guide is for developers who need detailed information about Amazon Fraud Detector API actions, data types, and errors. For more information about Amazon Fraud Detector features, see the Amazon Fraud Detector User Guide. We provide the Query API as well as AWS software development kits (SDK) for Amazon Fraud Detector in the Java and Python programming languages. The Amazon Fraud Detector Query API provides HTTPS requests that use the HTTP verb GET or POST and a Query parameter Action. AWS SDKs provide libraries, sample code, tutorials, and other resources for software developers who prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS. These libraries provide basic functions that automatically take care of tasks such as cryptographically signing your requests, retrying requests, and handling error responses, so that it is easier for you to get started. For more information about the AWS SDKs, go to the Tools to Build on AWS page, scroll down to the SDK section, and choose the plus (+) sign to expand the section.
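A rough sketch of constructing this client with the AWS SDK for Go v2 pattern; GetDetectors is just one example operation, and the field accessed on its output is an assumption based on the service's API shape:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/frauddetector"
    )

    func main() {
        // Load region and credentials from the default configuration sources.
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }

        client := frauddetector.NewFromConfig(cfg)

        // GetDetectors is one example operation; see the service docs for others.
        out, err := client.GetDetectors(context.TODO(), &frauddetector.GetDetectorsInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("detectors:", len(out.Detectors))
    }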
Package try emulates aspects of the ill-fated "try" proposal using generics. See https://golang.org/issue/32437 for inspiration. Example usage: This package is a sharp tool and should be used with care. Quick and easy error handling can occlude critical error handling logic. Panic handling generally should not cross package boundaries or be an explicit part of an API. Package try is a good fit for short Go programs and unit tests where development speed is a greater priority than reliability. Since the E functions panic if an error is encountered, recovering in such programs is optional. Code before try: Code after try: The E family of functions all remove a final error return, panicking if non-nil. Handle recovers from that panic and allows assignment of the error to a return error value. Other panics are not recovered. HandleF is like Handle, but it calls a function after any such assignment. F wraps an error with file and line information and calls a function on error. It inter-operates well with testing.TB and log.Fatal. Recover is like F, but it supports more complicated error handling by passing the error and runtime frame directly to a function.
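A short sketch of the pattern described above, assuming generic E1-style helpers and a Handle that assigns the recovered error to a named return; the import path and exact signatures are assumptions:

    package config

    import (
        "encoding/json"
        "io"
        "os"

        "github.com/dsnet/try" // assumed import path
    )

    type Config struct {
        Debug bool `json:"debug"`
    }

    // ReadConfig panics internally via the E helpers on any error and
    // converts that panic back into an ordinary returned error via Handle.
    func ReadConfig(path string) (cfg Config, err error) {
        defer try.Handle(&err)

        f := try.E1(os.Open(path)) // panics if os.Open returned an error
        defer f.Close()

        data := try.E1(io.ReadAll(f))
        try.E(json.Unmarshal(data, &cfg))
        return cfg, nil
    }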
Package leabra is the overall repository for all standard Leabra algorithm code implemented in the Go language (golang) with Python wrappers. This top-level of the repository has no functional code -- everything is organized into the following sub-repositories: * leabra: the core standard implementation with the minimal set of standard mechanisms exclusively using rate-coded neurons -- there are too many differences with spiking, so that is now separated out into a different package. * deep: the DeepLeabra version which performs predictive learning by attempting to predict the activation states over the Pulvinar nucleus of the thalamus (in posterior sensory cortex), which are driven phasically every 100 msec by deep layer 5 intrinsic bursting (5IB) neurons that have strong focal (essentially 1-to-1) connections onto the Pulvinar Thalamic Relay Cell (TRC) neurons. * examples: these actually compile into runnable programs and provide the starting point for your own simulations. examples/ra25 is the place to start for the most basic standard template of a model that learns a small set of input / output patterns in a classic supervised-learning manner. * python: follow the instructions in the README.md file to build a python wrapper that will allow you to fully control the models using Python.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons. We built an internationalized application and needed to know the field, and what validation failed so we could provide a localized error. Doing things this way is actually the way the standard library does, see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not type cast it to type ValidationErrors like so err.(validator.ValidationErrors). Custom functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the default separator of validation tags. If you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and' for example (Usage: omitempty,rgb|rgba) When a field that is a nested struct is encountered, and contains this flag any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Is a special tag without a validation function attached. It is used when a field is a Pointer, Interface or Invalid and you wish to validate that it exists. Example: want to ensure a bool exists if you define the bool as a pointer and use exists it will ensure there is a value; couldn't use required as it would fail when the bool was false. 
exists will fail if the value is a Pointer, Interface or Invalid and is nil. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For numbers, len will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. 
Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This validates that a string value contains alpha characters only This validates that a string value contains alphanumeric characters only This validates that a string value contains a basic numeric value. basic excludes exponents etc... This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. This validates that a string value contains a valid version 3 UUID. This validates that a string value contains a valid version 4 UUID. This validates that a string value contains a valid version 5 UUID. 
This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: This package panics when bad input is provided, this is by design, bad code like that should not make it to production.
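A short usage sketch of tag-based struct validation as described above; constructing the Validate instance and the error details differ between major versions of this package, so the import path and validator.New() call are assumptions:

    package main

    import (
        "fmt"

        "gopkg.in/go-playground/validator.v9" // assumed import path/version
    )

    type User struct {
        Name  string `validate:"required"`
        Email string `validate:"required,email"`
        Age   int    `validate:"gte=0,lte=130"`
    }

    func main() {
        validate := validator.New()

        err := validate.Struct(User{Name: "", Email: "not-an-email", Age: 200})
        if err != nil {
            // The library returns ValidationErrors; build your own messages from it.
            for _, fieldErr := range err.(validator.ValidationErrors) {
                fmt.Println(fieldErr)
            }
        }
    }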
Package textrank is an implementation of the Text Rank algorithm in Go with extendable features (automatic summarization, phrase extraction). It supports multithreading by goroutines. The package is under The MIT Licence. If there was a program that could rank a book-size text's words, phrases and sentences continuously on multiple threads, and it would be open to modification by objects, written in a simple, secure, static language, and if it would be very well documented... Now, here it is. - Find the most important phrases. - Find the most important words. - Find the most important N sentences. - Importance by phrase weights. - Importance by word occurrence. - Find the first N sentences, starting from the Xth sentence. - Find sentences by phrase chains ordered by position in text. - Access to the whole ranked data. - Support more languages. - Algorithm for weighting can be modified by interface implementation. - Parser can be modified by interface implementation. - Multi thread support. Find the most important phrases: This is the most basic and simplest usage of textrank. All possible pre-defined finder queries: After ranking, the graph contains a lot of valuable data. There are functions in the textrank package that contain logic to retrieve those data from the graph. After ranking, the graph contains a lot of valuable data. The GetRank function allows access to the graph and every piece of data can be retrieved from this structure. Adding text continuously: It is possible to add more text after other texts have already been added. The Ranking function can merge these multiple texts and it can recalculate the weights and all related data. Using a different algorithm to rank text: Two algorithms have been implemented; it is possible to write a custom algorithm via the Algorithm interface and use it instead of the defaults. Using multiple graphs: Graph IDs exist because it is possible to run multiple independent text ranking processes. Using different non-English languages: English is used by default but it is possible to add any language. To use other languages a stop word list is required, which you can find here: https://github.com/stopwords-iso Asynchronous usage by goroutines: It is thread safe. Independent graphs can receive texts at the same time and can be extended by more text also at the same time.
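A rough sketch of the basic phrase-extraction flow; the constructor and finder names follow the package's documented defaults, but the import path and exact identifiers should be treated as assumptions:

    package main

    import (
        "fmt"

        "github.com/DavidBelicza/TextRank/v2" // assumed import path
    )

    func main() {
        rawText := "Text ranking is fun. Ranking text with TextRank is even more fun."

        // Create a graph and the default language, rule and algorithm.
        tr := textrank.NewTextRank()
        rule := textrank.NewDefaultRule()
        language := textrank.NewDefaultLanguage()
        algorithm := textrank.NewDefaultAlgorithm()

        // Load the raw text and rank it.
        tr.Populate(rawText, language, rule)
        tr.Ranking(algorithm)

        // Retrieve the most important phrases, best first.
        for _, phrase := range textrank.FindPhrases(tr) {
            fmt.Println(phrase)
        }
    }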
Package graphql-go-tools is a library for creating GraphQL services using the Go programming language. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Source: https://graphql.org This library is intended to be a set of low level building blocks to write high performance and secure GraphQL applications. Use cases could range from writing layer seven GraphQL proxies, firewalls, caches, etc. You would usually not use this library to write a GraphQL server yourself but to build tools for the GraphQL ecosystem. To achieve this goal the library has zero dependencies at its core functionality. It has a full implementation of the GraphQL AST and supports lexing, parsing, validation, normalization, introspection, query planning as well as query execution. With the execution package it's possible to write a fully functional GraphQL server that is capable of mediating between various protocols and formats. In its current state you can use the following DataSources to resolve fields: - Static data (embed static data into a schema to extend a field in a simple way) - HTTP JSON APIs (combine multiple Restful APIs into one single GraphQL Endpoint, nesting is possible) - GraphQL APIs (you can combine multiple GraphQL APIs into one single GraphQL Endpoint, nesting is possible) - Webassembly/WASM Lambdas (e.g. resolve a field using a Rust lambda) If you're looking for a ready to use solution that has all this functionality packaged as a Gateway, have a look at: https://wundergraph.com Created by Jens Neuse
Package libc provides run time support for programs generated by the ccgo C to Go transpiler, version 4 or later. Many C libc functions are not thread safe. Such functions are not safe for concurrent use by multiple goroutines in the Go translation as well. C threads are modeled as Go goroutines. Every such C thread, i.e. a Go goroutine, must use its own Thread Local Storage instance implemented by the TLS type. Signal handling in translated C code is not coordinated with the Go runtime. This is probably the same as when running C code via CGo. This package synchronizes its environ with the current Go environ lazily and only once. From Linux man-pages Copyleft
Package autoflags provides a convenient way of exposing struct fields as command line flags. Exposed fields should have a special tag attached: After declaring your flags and their default values as above, just register the flags with autoflags.Define and call flag.Parse() as usual: Now the config struct has its fields populated from command line flags. Call the program with flags to override the default values: Package autoflags understands all basic types supported by the flag package's xxxVar functions: int, int64, uint, uint64, float64, bool, string, time.Duration. Types implementing the flag.Value interface are also supported. Attaching a non-empty `flag` tag to a field of an unsupported type results in a panic.
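For illustration, a minimal sketch of that flow might look like the following; the struct, its field names, the `flag:"name,usage"` tag form, and the import path are assumptions to verify against the package's own documentation:

    package main

    import (
        "flag"
        "fmt"
        "time"

        "github.com/artyom/autoflags" // assumed import path
    )

    func main() {
        // Field names, flag names, and defaults below are illustrative.
        // Default values are whatever the struct holds before Define is called.
        config := struct {
            Name    string        `flag:"name,who to greet"`
            Age     int           `flag:"age,age in years"`
            Timeout time.Duration `flag:"timeout,how long to wait"`
        }{
            Name:    "world",
            Age:     42,
            Timeout: 3 * time.Second,
        }

        autoflags.Define(&config) // register tagged fields as flags
        flag.Parse()

        fmt.Printf("%+v\n", config)
    }

Running the program with, say, -age=7 would then override the corresponding default.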
Package phony is a small actor model library for Go, inspired by the causal messaging system in the Pony programming language. An Actor is an interface satisfied by a lightweight Inbox struct. Structs that embed an Inbox satisfy an interface that allows them to send messages to eachother. Messages are functions of 0 arguments, typically closures, and should not perform blocking operations. Message passing is asynchronous, causal, and fast. Actors implemented by the provided Inbox struct are scheduled to prevent messages queues from growing too large, by pausing at safe breakpoints when an Actor detects that it sent something to another Actor whose inbox is flooded.
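A minimal sketch of this message-passing style, assuming the Inbox.Act and phony.Block signatures and the github.com/Arceliar/phony import path; the counter type is illustrative:

    package main

    import (
        "fmt"

        "github.com/Arceliar/phony" // assumed import path
    )

    // counter is an Actor: it embeds an Inbox and its state is only touched
    // from inside messages sent to that Inbox.
    type counter struct {
        phony.Inbox
        count int
    }

    // Increment asynchronously sends a message asking the counter to add one.
    func (c *counter) Increment(from phony.Actor) {
        c.Act(from, func() { c.count++ })
    }

    func main() {
        c := new(counter)
        for i := 0; i < 10; i++ {
            c.Increment(nil) // nil sender: no backpressure applied to the caller
        }
        // Block sends a message and waits for it to run; since it is delivered
        // after the increments above, they have all been processed by then.
        var n int
        phony.Block(c, func() { n = c.count })
        fmt.Println(n) // 10
    }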
Package tfortools provides a set of functions that are designed to make it easier for developers to add template based scripting to their command line tools. Command line tools written in Go often allow users to specify a template script to tailor the output of the tool to their specific needs. This can be useful both when visually inspecting the data and also when invoking command line tools in scripts. The best example of this is go list, which allows users to pass a template script, for example one that prints all the imports of the current package, to extract interesting information about Go packages. The aim of this package is to make it easier for developers to add template scripting support to their tools and easier for users of these tools to extract the information they need. It does this by augmenting the templating language provided by the standard library package text/template in two ways: 1. It auto generates descriptions of the data structures passed as input to a template script for use in help messages. This ensures that help usage information is always up to date with the source code. 2. It provides a suite of convenience functions to make it easy for script writers to extract the data they need. There are functions for sorting, selecting rows and columns and generating nicely formatted tables. For example, if a program passed a slice of structs containing stock data to a template script, a short script could extract the names of the 3 stocks with the highest trade volume and print them as a small table. The functions head, sort, tables and col used in such scripts are provided by this package.
Package evalfilter allows running a user-supplied script against an object. We're constructed with a program; internally we parse that into an abstract syntax tree, then we walk that tree to generate a series of bytecodes. The bytecode is then executed via the VM package. The included example is a function which filters a list of people, returning only those members who are above a particular age, via the use of a simple script.
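A rough sketch of that filtering flow; treat the New, Prepare, and Run calls and the /v2 import path as assumptions to verify against the package documentation, and note that the Person type and the script itself are illustrative:

    package main

    import (
        "fmt"

        "github.com/skx/evalfilter/v2" // assumed import path
    )

    type Person struct {
        Name string
        Age  int
    }

    func main() {
        people := []Person{{"Ada", 40}, {"Bob", 12}}

        // The script returns true for people older than 18, which is how
        // we decide whether to keep each entry.
        eval := evalfilter.New(`if ( Age > 18 ) { return true; } return false;`)
        if err := eval.Prepare(); err != nil {
            panic(err)
        }

        for _, p := range people {
            keep, err := eval.Run(p)
            if err != nil {
                panic(err)
            }
            if keep {
                fmt.Println(p.Name)
            }
        }
    }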
Package textrank is an implementation of the TextRank algorithm in Go with extendable features (automatic summarization, phrase extraction). It supports multithreading by goroutines. The package is under The MIT Licence. If there were a program that could rank the words, phrases and sentences of book-sized texts continuously on multiple threads, that was open to modification through objects, written in a simple, secure, static language, and very well documented... Now, here it is.
- Find the most important phrases.
- Find the most important words.
- Find the most important N sentences.
- Importance by phrase weights.
- Importance by word occurrence.
- Find the first N sentences, starting from the Xth sentence.
- Find sentences by phrase chains ordered by position in text.
- Access to the whole ranked data.
- Support for more languages.
- Algorithm for weighting can be modified by interface implementation.
- Parser can be modified by interface implementation.
- Multi thread support.
Find the most important phrases: This is the most basic and simplest usage of textrank. All possible pre-defined finder queries: After ranking, the graph contains a lot of valuable data, and the textrank package contains functions with the logic to retrieve that data from the graph. The GetRank function allows access to the graph, and every piece of data can be retrieved from this structure. Adding text continuously: It is possible to add more text after other texts have already been added. The Ranking function can merge these multiple texts and recalculate the weights and all related data. Using a different algorithm to rank text: Two algorithms are implemented, and it is possible to write a custom algorithm via the Algorithm interface and use it instead of the defaults. Using multiple graphs: Graph IDs exist because it is possible to run multiple independent text ranking processes. Using different non-English languages: English is used by default but it is possible to add any language. To use other languages a stop word list is required, which you can find here: https://github.com/stopwords-iso Asynchronous usage by goroutines: It is thread safe. Independent graphs can receive texts at the same time and can be extended by more text, also at the same time.
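As a very rough sketch of the basic "find the most important phrases" flow: the constructor names, the FindPhrases helper, and the import path below follow the project README as best understood and should all be treated as assumptions to check against the package documentation.

    package main

    import (
        "fmt"

        textrank "github.com/DavidBelicza/TextRank/v2" // assumed import path
    )

    func main() {
        text := "Your long input text goes here. It can be many sentences."

        // Default rule, language, and ranking algorithm (names assumed).
        tr := textrank.NewTextRank()
        rule := textrank.NewDefaultRule()
        language := textrank.NewDefaultLanguage()
        algorithm := textrank.NewDefaultAlgorithm()

        // Parse the text into the graph, then rank it.
        tr.Populate(text, language, rule)
        tr.Ranking(algorithm)

        // Retrieve the most important phrases from the ranked graph.
        for _, phrase := range textrank.FindPhrases(tr) {
            fmt.Println(phrase)
        }
    }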
Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic usage is illustrated below; see the documentation for the V function for an explanation of the V-style calls. Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to files in a temporary directory. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
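A minimal sketch of that basic usage, assuming the github.com/golang/glog import path; the messages and verbosity level are illustrative:

    package main

    import (
        "flag"

        "github.com/golang/glog"
    )

    func main() {
        flag.Parse()       // glog registers its flags; parse before logging
        defer glog.Flush() // flush buffered log lines before exiting

        glog.Info("Prepare to repel boarders")

        // V-style logging: the guarded form avoids building the log line
        // unless -v=2 (or a matching -vmodule entry) is in effect.
        if glog.V(2) {
            glog.Info("Starting transaction...")
        }
        glog.V(2).Infoln("Processed", 17, "elements")
    }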
Package log4go provides level-based and highly configurable logging. This is inspired by the logging functionality in Java. Essentially, you create a Logger object and create output filters for it. You can send whatever you want to the Logger, and it will filter that based on your settings and send it to the outputs. This way, you can put as much debug code in your program as you want, and when you're done you can filter out the mundane messages so only the important ones show up. Utility functions are provided to make life easier. Here is some example code to get started:

    log := log4go.NewLogger()
    log.AddFilter("stdout", log4go.DEBUG, log4go.NewConsoleLogWriter())
    log.AddFilter("log", log4go.FINE, log4go.NewFileLogWriter("example.log", true))
    log.Info("The time is now: %s", time.LocalTime().Format("15:04:05 MST 2006/01/02"))

The first two lines can be combined with the utility NewDefaultLogger:

    log := log4go.NewDefaultLogger(log4go.DEBUG)
    log.AddFilter("log", log4go.FINE, log4go.NewFileLogWriter("example.log", true))
    log.Info("The time is now: %s", time.LocalTime().Format("15:04:05 MST 2006/01/02"))

Usage notes: Changes from 2.0: Future work: (please let me know if you think I should work on any of these particularly)
Rf refactors Go programs. Usage: Rf applies a script of refactoring commands to the package in the current directory. For example, to unexport a field in a struct by renaming it: By default, rf writes changes back to the disk. The -diff flag causes rf to print a diff of the intended changes instead. A script is a sequence of commands, one per line. Comments are introduced by # and extend to the end of the line. Commands may be broken across lines by ending all but the last with a trailing backslash (before any comment), as in: Commands that take { } blocks need not backslash-escape line breaks inside the braces, as in: Most commands take “code addresses” as arguments. Each code address identifies some code in a program. For illustration, consider this program, prog.go: The simplest code address is the name of a top-level declaration. In this program, those addresses are C, D, F, T, V, and VT. Adding .Name to an address selects a name within the earlier address, whether that's a function variable (F.who, F.msg), a struct field (T.Field), a method (T.M, T.P), or a variable's field (V.Value, V.thing, V.thing.Field, VT.Field). Another kind of code address is a Go source file, identified by a name ending in “.go”, as in “file.go”. If the file name contains a slash, as in “../dir/file.go”, the address identifies a file in the package in “../dir”. Another kind of code address is a Go package, identified by a path containing a slash but not ending in “.go”, as in “../dir” or “example.com/pkg”. A final kind of code address is a textual range in a function body or source file, identified by the syntax Ident:Range, where Ident identifies a file, function, or method and Range identifies a section of text within. The Range syntax is as used in the Acme and Sam text editors. The most common forms are the line range “N,M”, the byte range “#N,M”, and the regular expression range “/re1/,/re2/”. For example: See http://9p.io/sys/doc/sam/sam.html Table II for details on the syntax. The add command adds text to the source code. It takes as an argument the address after which the text should be added, followed by the text itself. The address may be a declaration, text range, file, or package. In all cases, the text is added immediately after the addressed location: after the declaration, after the text range, at the end of the file, or at the end of the first file in the package (considering the file names in lexical order). Examples: The cp command is like mv (see below) but doesn't delete the source and doesn't update any references. (UNIMPLEMENTED) The ex command applies rewrites based on example snippets. The arguments to ex are interpreted as Go code consisting of a sequence of imports and then a list of special “old -> new” rules. Each rule specifies that where ex finds a pattern matching old, it should replace the code with new. For example, to replace all log.Error calls with log.Panic: Declarations introduce typed pattern variables that can be used in rules. For example, to simplify certain needlessly complex uses of fmt.Sprintf: The inline command inlines uses of declared constants, functions, and types. Each use of the named declarations is replaced by the declaration's definition. If the -rm flag is given, inline removes the declarations as well. Examples: Given the declarations in the “Code addresses” section above, the first command replaces all uses of E with 2.718281828. The second replaces all uses of TAlias with T and then removes TAlias. UNIMPLEMENTED: Inlining of functions. 
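For illustration, a small rf script combining the ex and inline commands described above might look like the following; the rule syntax mirrors the "old -> new" description in this section and the inlined names (E, TAlias) come from the "Code addresses" example, so verify the exact form against rf's own documentation:

    # Replace all log.Error calls with log.Panic (rule syntax as described above).
    ex {
        import "log"
        log.Error -> log.Panic
    }

    # Inline uses of the constant E, then inline TAlias and remove its declaration.
    inline E
    inline -rm TAlias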
The key command converts all struct literals for a list of types to keyed literals. Each address must identify a struct type. All literals of those struct types are updated to use the keyed form. Example: The mv command moves and renames code. When mv moves or renames old code, it also updates any references to use the new names or locations for the code. This includes updating other packages in the current module. In general, mv aims to act appropriately for any sensible combination of old and new address form. The rest of this section enumerates the specific cases that mv handles. item → renamed item Any named item can be renamed by specifying a destination that is the same code address with the final element changed. For example: In this form, the destination address must repeat the dot-separated elements leading up to the new name, changing only the final element. The repetition here distinguishes this form from other forms. var → var field A top-level variable can be moved to a new or existing field in a global variable of struct type. For example: method → func A method can be moved to a top-level function, removing the association with the receiver type. The receiver remains the first argument of the new function. For example: func → method A function can be moved to a method on the type of its first argument, assuming that type is defined in the same package where the function appears. For example: UNIMPLEMENTED. code text → new function A text range can be moved to a new function, leaving behind an appropriate call to that function. For example: code text → new method A text range can be moved to a new method, leaving behind an appropriate call to that method. For example: TODO. UNIMPLEMENTED. declaration → file If the source is a top-level declaration (const, func, method, type, var) and the destination is a file, mv moves that declaration, along with any comments immediately preceding it, to the end of the destination file. For example: Naming a single item in a declaration block is taken to indicate wanting to move all items in the block. In the example from the “Code addresses” section, “mv C x.go” moves D as well. Any time a destination file must be created, mv initializes it with the header comments (those above the package declaration and any package doc) from the file the source code is being moved from. This heuristic is meant to copy header text like copyright notices. The file may be in a different package. As usual, mv updates references to the moved declaration to refer to its new location, inserting imports as needed. If the result is an import cycle, mv reports the cycle rather than attempt some kind of automatic (and likely wrong) fix. declaration → package If the source is a top-level declaration and the destination is a package, mv moves the declaration to a file in the destination package with the same name as the one holding the source item. If the destination package does not exist, it will be created. file → file If the source and destination are both files, mv moves all code from the source file to the end of the destination file. For example: file → package If the source is a file and the destination is a package, mv moves all code from the source file to a file in the destination with the same base name. For example: package → file If the source is a package and the destination is a file, mv moves all code from the source package to the named file. For example: UNIMPLEMENTED. 
package → package If the source and destination are both packages, mv moves all code from the source package to the destination package. For example: UNIMPLEMENTED. many → file, many → package If the destination is a file or package, multiple sources can be listed. The mv command moves each source item in turn to the destination. The rm command removes code. Rm deletes the old code. All address forms are valid. When deleting a declaration, rm also deletes line comments immediately preceding it, up to a blank line. UNIMPLEMENTED: removal of struct fields, interface methods, text ranges. Rf is very very rough. Everything is subject to change, and it may break your programs.
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/ Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference of the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/ The SDK is composed of two main components, the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK.
aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier.
awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted such as RequestId.
credentials - Provides the types and built-in credentials providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credentials providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles.
endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2".
session - Provides initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file.
request - Provides the API request sending, and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request.
service - Clients for AWS services. All services supported by the SDK are available under this folder.
The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports. 
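As a rough, self-contained sketch of how these pieces fit together (the region, bucket, and key are placeholders), creating a client from a Session and inspecting a returned awserr.Error might look like this:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // Shared configuration for clients; credentials come from the
        // default credential chain unless overridden.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))
        svc := s3.New(sess)

        _, err := svc.GetObject(&s3.GetObjectInput{
            Bucket: aws.String("my-bucket"), // placeholder names
            Key:    aws.String("my-key"),
        })
        if err != nil {
            // Cast to awserr.Error and compare Code against the service's
            // ErrCode constants, as described above.
            if aerr, ok := err.(awserr.Error); ok {
                switch aerr.Code() {
                case s3.ErrCodeNoSuchBucket, s3.ErrCodeNoSuchKey:
                    log.Fatal("missing bucket or key: ", aerr.Message())
                default:
                    log.Fatal("request failed: ", aerr.Code(), " ", aerr.Message())
                }
            }
            log.Fatal(err)
        }
        fmt.Println("object fetched")
    }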
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override configuration provided in the Session and create service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently. In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following:
Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.
Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.
EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production.
Credentials can be configured in code as well by setting the Config's Credentials value to a custom provider or using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. This example creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests to. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created. 
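A minimal sketch of that role-assumption example (the role ARN, region, and bucket name are placeholders):

    package main

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // Session built from the default credential and configuration chain.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))

        // Credentials provider that assumes the IAM role "myRoleARN".
        creds := stscreds.NewCredentials(sess, "myRoleARN")

        // Only this client uses the assumed-role credentials; other clients
        // created from sess keep using the default chain.
        svc := s3.New(sess, &aws.Config{Credentials: creds})

        if _, err := svc.ListObjects(&s3.ListObjectsInput{
            Bucket: aws.String("my-bucket"),
        }); err != nil {
            log.Fatal(err)
        }
    }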
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment, set the "AWS_REGION" environment variable to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful, because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct Region field to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/ In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests.
ListObjects - Base API operation that will make the API request to the service.
ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly.
ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response.
ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument for modifying how the request will be made, or extracting information from the raw HTTP response.
ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as the variadic argument.
In addition to the API operations the SDK also includes several higher level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters.
WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Will return successfully when that state is reached.
WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern. 
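A short sketch tying the region options to the ListObjectsPages form; the region constants come from the endpoints package, while the bucket name is a placeholder:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/endpoints"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // Session-level region: every client created from sess defaults to it.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String(endpoints.UsWest2RegionID),
        }))

        // Per-client override: this client talks to us-east-1 instead.
        svcEast := s3.New(sess, aws.NewConfig().WithRegion(endpoints.UsEast1RegionID))

        // Pages variant: the callback is invoked once per page of results;
        // returning true asks for the next page.
        err := svcEast.ListObjectsPages(&s3.ListObjectsInput{
            Bucket: aws.String("my-bucket"),
        }, func(page *s3.ListObjectsOutput, lastPage bool) bool {
            for _, obj := range page.Contents {
                fmt.Println(aws.StringValue(obj.Key))
            }
            return true
        })
        if err != nil {
            log.Fatal(err)
        }
    }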
In addition, these methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If the waiter times out, the error code returned will be request.WaiterResourceNotReadyErrorCode. This example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
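In the spirit of that example, here is a compact sketch of a timeout-bounded upload; the file, bucket, key, and timeout are placeholders:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/request"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))
        svc := s3.New(sess)

        f, err := os.Open("file.txt") // placeholder file name
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Cancel the upload if it takes longer than 30 seconds.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        _, err = svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
            Bucket: aws.String("my-bucket"), // placeholder bucket and key
            Key:    aws.String("file.txt"),
            Body:   f,
        })
        if err != nil {
            if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
                log.Fatal("upload canceled due to timeout: ", err)
            }
            log.Fatal("failed to upload object: ", err)
        }
        log.Println("upload complete")
    }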