Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/xhbase/enmime
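The flow described above can be sketched as follows; this is a minimal, hedged example (the input file name is invented, and error handling is reduced to panics):

    package main

    import (
        "fmt"
        "os"

        "github.com/xhbase/enmime"
    )

    func main() {
        f, err := os.Open("message.eml") // hypothetical input message
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // ReadEnvelope builds the Part tree and sorts it into an Envelope.
        env, err := enmime.ReadEnvelope(f)
        if err != nil {
            panic(err) // hard error: the message could not be parsed at all
        }

        fmt.Println("Subject:", env.GetHeader("Subject")) // RFC 2047 decoded
        fmt.Println("Text:", env.Text)                    // plain text, possibly down-converted from HTML
        for _, p := range env.Attachments {
            fmt.Println("Attachment:", p.FileName)
        }
        for _, e := range env.Errors { // soft errors from tolerant parsing
            fmt.Println("Parse issue:", e)
        }
    }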
CoinPrices is a utility to fetch cryptocurrency prices. Usage: By default, the prices are displayed in USD and the number of cryptocurrencies is limited to 10. Valid currencies are: USD, AUD, BRL, BGN, CAD, CHF, CLP, CNY, CZK, DKK, EUR, GBP, HKD, HUF, IDR, ILS, INR, JPY, KRW, MXN, MYR, NOK, NZD, PHP, PKR, PLN, RUB, SEK, SGD, THB, TRY, TWD, ZAR. The data is obtained from [CoinMarketCap API](https://coinmarketcap.com/api/) and the quotes are refreshed every 5 minutes.
Package logr defines abstract interfaces for logging. Packages can depend on these interfaces and callers can implement logging in whatever way is appropriate. This design derives from Dave Cheney's blog: This is a BETA grade API. Until there is a significant 2nd implementation, I don't really know how it will change. The logging specifically makes it non-trivial to use format strings, to encourage attaching structured information instead of unstructured format strings. Logging is done using a Logger. Loggers can have name prefixes and named values attached, so that all log messages logged with that Logger have some base context associated. The term "key" is used to refer to the name associated with a particular value, to disambiguate it from the general Logger name. For instance, suppose we're trying to reconcile the state of an object, and we want to log that we've made some decision. With the traditional log package, we might write With logr's structured logging, we'd write Depending on our logging implementation, we could then make logging decisions based on field values (like only logging such events for objects in a certain namespace), or copy the structured information into a structured log store. For logging errors, Logger has a method called Error. Suppose we wanted to log an error while reconciling. With the traditional log package, we might write With logr, we'd instead write This functions similarly to: However, it ensures that a standard key for the error value ("error") is used across all error logging. Furthermore, certain implementations may choose to attach additional information (such as stack traces) on calls to Error, so it's preferred to use Error to log errors. Each log message from a Logger has four types of context: logger name, log verbosity, log message, and the named values. The Logger name consists of a series of name "segments" added by successive calls to WithName. These name segments will be joined in some way by the underlying implementation. It is strongly recommended that name segments contain simple identifiers (letters, digits, and hyphen), and do not contain characters that could muddle the log output or confuse the joining operation (e.g. whitespace, commas, periods, slashes, brackets, quotes, etc). Log verbosity represents how little a log matters. Level zero, the default, matters most. Increasing levels matter less and less. Try to avoid lots of different verbosity levels, and instead provide useful keys, logger names, and log messages for users to filter on. It's illegal to pass a log level below zero. The log message consists of a constant message attached to the log line. This should generally be a simple description of what's occurring, and should never be a format string. Variable information can then be attached using named values (key/value pairs). Keys are arbitrary strings, while values may be any Go value. While users are generally free to use key names of their choice, it's generally best to avoid using the following keys, as they're frequently used by implementations: Implementations are encouraged to make use of these keys to represent the above concepts, when necessary (for example, in a pure-JSON output form, it would be necessary to represent at least message and timestamp as ordinary named values).
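A hedged sketch of these conventions in use (the go-logr/logr import path is assumed, the logger would be supplied by a concrete implementation which is not shown here, and the message and key names are illustrative):

    package reconcile

    import "github.com/go-logr/logr"

    func reconcile(log logr.Logger, obj interface{}, err error) {
        // Structured logging: a constant message plus key/value pairs,
        // never a format string.
        log.Info("setting field on object", "field", "spec.replicas", "object", obj)

        // Error logging: the implementation attaches the standard "error"
        // key and possibly extras such as stack traces.
        if err != nil {
            log.Error(err, "unable to reconcile object", "object", obj)
        }

        // Name segments and verbosity attach filterable base context.
        log.WithName("reconciler").V(1).Info("requeueing", "object", obj)
    }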
Implement SASL / SCRAM-SHA-256 authentication. See RFC 5802, "Salted Challenge Response Authentication Mechanism (SCRAM) SASL and GSS-API Mechanisms". See PostgreSQL docs for how SASL is implemented in the PostgreSQL FE/BE protocol: Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. 
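A minimal, hedged sketch tying the pieces above together: driver registration through database/sql, a keyword/value connection string, ordinal parameter markers (with $1 reused for the same parameter), and RETURNING in place of LastInsertId. The table, columns, and connection parameters are invented for the example:

    package main

    import (
        "database/sql"
        "fmt"

        _ "github.com/lib/pq" // registers the "postgres" driver
    )

    func main() {
        db, err := sql.Open("postgres", "user=app dbname=app sslmode=disable")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        var id int
        err = db.QueryRow(
            `INSERT INTO users (name, nick) VALUES ($1, $1) RETURNING id`,
            "alice",
        ).Scan(&id)
        if err != nil {
            panic(err)
        }
        fmt.Println("inserted id:", id)
    }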
You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/lib/pq/example/listen.
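Since the usage example itself is omitted above, here is a hedged sketch of the COPY pattern just described: prepare pq.CopyIn inside an explicit transaction, call Exec once per row, then call Exec once with no arguments to flush (the users table and its columns are illustrative):

    package bulk

    import (
        "database/sql"

        "github.com/lib/pq"
    )

    func bulkImport(db *sql.DB, rows [][2]string) error {
        txn, err := db.Begin()
        if err != nil {
            return err
        }
        defer txn.Rollback() // no-op once Commit has succeeded

        stmt, err := txn.Prepare(pq.CopyIn("users", "name", "nick"))
        if err != nil {
            return err
        }
        for _, r := range rows {
            // Because of internal buffering, an error here may relate
            // to rows passed in earlier calls.
            if _, err := stmt.Exec(r[0], r[1]); err != nil {
                return err
            }
        }
        if _, err := stmt.Exec(); err != nil { // flush all buffered data
            return err
        }
        if err := stmt.Close(); err != nil {
            return err
        }
        return txn.Commit()
    }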
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.
2016-07-29: Release v1.0.6 enables alternatively using = instead of == for equality operation.
2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb).
2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.
2016-04-03: Release v1.0.3 fixes a data race.
2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock.
2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.
2016-03-05: The current version has been tagged v1.0.0.
2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.
2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.
2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful while having a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default.
2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.
2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.
2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)
2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section.
2015-01-17: Logical operators || and && have now alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases.
The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.
2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.
2014-08-08: The built in functions max and min now accept also time values. Thanks opennota! (https://github.com/opennota)
2014-06-05: RecordSet interface extended by new methods FirstRow and Rows.
2014-06-02: Indices on id() are now used by SELECT statements.
2014-05-07: Introduction of Marshal, Schema, Unmarshal.
2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX.
2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.
2014-04-11: Introduction of LIMIT, OFFSET.
2014-04-10: Introduction of query rewriting.
2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter.
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared, however note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for the names of metadata virtual tables. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.
Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.
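Stepping back to the QL parameters described above: the sketch below is hedged, assuming the ql Go API (OpenMem, NewRWCtx, Run) and an invented table t, and shows values being supplied at execution time through the expression list passed to DB.Run:

    package main

    import (
        "fmt"

        "github.com/cznic/ql"
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }
        ctx := ql.NewRWCtx() // read-write transaction context

        // Mutating statements must run inside a transaction; $1..$4 are
        // bound from the trailing arguments at execution time.
        if _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE t (Name string, Dept string);
                INSERT INTO t VALUES ($1, $2);
                INSERT INTO t VALUES ($3, $4);
            COMMIT;`,
            "Ann", "Eng", "Bob", "Ops",
        ); err != nil {
            panic(err)
        }

        // Reads work outside a transaction; writing ?1 instead of $1
        // would be equivalent.
        rs, _, err := db.Run(nil, `SELECT Name FROM t WHERE Dept == $1;`, "Eng")
        if err != nil {
            panic(err)
        }
        if err = rs[0].Do(false, func(row []interface{}) (bool, error) {
            fmt.Println(row) // [Ann]
            return true, nil
        }); err != nil {
            panic(err)
        }
    }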
Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply:

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string s is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR) Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only.
+     sum                    integers, rationals, floats, complex values, strings
-     difference             integers, rationals, floats, complex values, times
*     product                integers, rationals, floats, complex values
/     quotient               integers, rationals, floats, complex values
%     remainder              integers

&     bitwise AND            integers
|     bitwise OR             integers
^     bitwise XOR            integers
&^    bit clear (AND NOT)    integers

<<    left shift             integer << unsigned integer
>>    right shift            integer >> unsigned integer

Strings can be concatenated using the + operator String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows:

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.
- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).
- Integer values are comparable and ordered, in the usual way. Note that durations are integers.
- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.
- Rational values are comparable and ordered, in the usual way.
- String values are comparable and ordered, lexically byte-wise.
- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.
- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.
- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases:

- x has type T.
- x's type and T are both integer or floating point types.
- x's type and T are both complex types.
- x is an integer, except bigint or duration, and T is a string type.

Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer.
Values outside the range of valid Unicode code points are converted to "\uFFFD".
2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.
3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.
4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.
5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.
6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).
7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.
8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.
9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, the function calls and the evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.
2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.
3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.
4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which allows inserting values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines:

The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement.

If the statement has nothing special in its plan, the result is the original statement.

To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example

The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example

If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions.

A field is a named expression. Identifiers that are not used as a type in a conversion or as a function name in the Call clause denote names of (other) fields whose values are to be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list.

When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never make expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated.

All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value).

Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example

Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets.

The optional OFFSET clause allows ignoring the first N records. For example

The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example

The above will return at most the first 10 records of the record set. The value of the expression must be a non negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. Considering table t has, say, 10 records, a query like the one sketched below will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.
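A sketch consistent with that description, assuming QL places the LIMIT clause before OFFSET: skipping the first 3 of t's 10 records and keeping at most 5 yields records 4 - 8:

	SELECT * FROM t LIMIT 5 OFFSET 3;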
A select statement is evaluated in the following order of operations:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example

Update statements change values of fields in rows of a table. For example

Note: The SET clause is optional.

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying for DB meta data, there exist specially named tables, some of them being virtual.

Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is

The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is

The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is

It's possible to obtain a consolidated recordset for all properties of all DB columns by joining the __Column and __Column2 tables. The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is

The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example
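A sketch of the meta data tables and aggregates just described (the department table is reused from the earlier illustrations; outputs depend on the data):

	SELECT Name, Schema FROM __Table;          // every table and its CREATE statement
	SELECT count() FROM department;            // number of rows; 0 for an empty table
	SELECT avg(DepartmentID) FROM department;  // NULL if all values are NULL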
The built-in function date returns the time corresponding to yyyy-mm-dd hh:mm:ss + nsec nanoseconds in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1.

A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which.

A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC).

The month specifies a month of the year (January = 1, ...).

If any argument to date is NULL the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.

The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, defined to be Mon Jan 2 15:04:05 -0700 MST 2006, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). A given call thus returns one textual form on a machine in the CET time zone, but may return another on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using Go's number formatting functions in the `strconv` package. For both functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL.

Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one argument. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example

If id() without arguments is called for a row which is not a table record then the result value is NULL. For example

If id() has one argument it must be a table name of a table in a cross join. For example
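A hedged sketch of both arities; t and u stand for tables in a cross join as described:

	SELECT id(), DepartmentName FROM department;  // record id of each department row
	SELECT id(t), id(u) FROM t, u;                // one-argument form in a cross join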
The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example

The column values must be of an ordered type.

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.

The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, defined to be Mon Jan 2 15:04:05 -0700 MST 2006, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string.

Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC.

When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset.

If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
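A hedged sketch using several of the time built-ins above; the events table, its ts column, and the parseTime argument order (layout first) are assumptions for illustration:

	SELECT hour(ts), minute(ts), second(ts)
	FROM events
	WHERE ts > parseTime("2006-01-02", "2015-01-01");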
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.

The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL.

For the numeric types, the following sizes are guaranteed

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f, and a cross join whose WHERE expression doesn't comply with the above simple detected cases. Such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.
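Sketches of WHERE shapes matching the recognized patterns, with t and f as in the discussion above:

	SELECT * FROM t WHERE f == 42;  // column relOp constant expression
	SELECT * FROM t WHERE f <= $1;  // column relOp QL parameter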
Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example

Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
Package logr defines a general-purpose logging API and abstract interfaces to back that API. Packages in the Go ecosystem can depend on this package, while callers can implement logging with whatever backend is appropriate.

Logging is done using a Logger instance. Logger is a concrete type with methods, which defers the actual logging to a LogSink interface. The main methods of Logger are Info() and Error(). Arguments to Info() and Error() are key/value pairs rather than printf-style formatted strings, emphasizing "structured logging".

With Go's standard log package, we might write:

With logr's structured logging, we'd write:

Errors are much the same. Instead of:

We'd write:

Info() and Error() are very similar, but they are separate methods so that LogSink implementations can choose to do things like attach additional information (such as stack traces) on calls to Error(). Error() messages are always logged, regardless of the current verbosity. If there is no error instance available, passing nil is valid.

Often we want to log information only when the application is in "verbose mode". To write log lines that are more verbose, Logger has a V() method. The higher the V-level of a log line, the less critical it is considered. Log-lines with V-levels that are not enabled (as per the LogSink) will not be written. Level V(0) is the default, and logger.V(0).Info() has the same meaning as logger.Info(). Negative V-levels have the same meaning as V(0). Error messages do not have a verbosity level and are always logged.

Where we might have written:

We can write:

Logger instances can have name strings so that all messages logged through that instance have additional context. For example, you might want to add a subsystem name:

The WithName() method returns a new Logger, which can be passed to constructors or other functions for further use. Repeated use of WithName() will accumulate name "segments". These name segments will be joined in some way by the LogSink implementation. It is strongly recommended that name segments contain simple identifiers (letters, digits, and hyphen), and do not contain characters that could muddle the log output or confuse the joining operation (e.g. whitespace, commas, periods, slashes, brackets, quotes, etc).

Logger instances can store any number of key/value pairs, which will be logged alongside all messages logged through that instance. For example, you might want to create a Logger instance per managed object:

With the standard log package, we might write:

With logr we'd write:

Logger has very few hard rules, with the goal that LogSink implementations might have a lot of freedom to differentiate. There are, however, some things to consider.

The log message consists of a constant message attached to the log line. This should generally be a simple description of what's occurring, and should never be a format string. Variable information can then be attached using named values.

Keys are arbitrary strings, but should generally be constant values. Values may be any Go value, but how the value is formatted is determined by the LogSink implementation.

Logger instances are meant to be passed around by value. Code that receives such a value can call its methods without having to check whether the instance is ready for use. Calling methods with the null logger (Logger{}) as instance will crash because it has no LogSink. Therefore this null logger should never be passed around. For cases where passing a logger is optional, a pointer to Logger should be used.
Keys are not strictly required to conform to any specification or regex, but it is recommended that they:

These guidelines help ensure that log data is processed properly regardless of the log implementation. For example, log implementations will try to output JSON data or will store data for later database (e.g. SQL) queries.

While users are generally free to use key names of their choice, it's generally best to avoid using the following keys, as they're frequently used by implementations:

Implementations are encouraged to make use of these keys to represent the above concepts, when necessary (for example, in a pure-JSON output form, it would be necessary to represent at least message and timestamp as ordinary named values).

Implementations may choose to give callers access to the underlying logging implementation. The recommended pattern for this is:

Logger grants access to the sink to enable type assertions like this:

Custom `With*` functions can be implemented by copying the complete Logger struct and replacing the sink in the copy:

Don't use New to construct a new Logger with a LogSink retrieved from an existing Logger. Source code attribution might not work correctly and unexported fields in Logger get lost.

Beware that the same LogSink instance may be shared by different logger instances. Calling functions that modify the LogSink will affect all of those.
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for the latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but none exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including the output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. A composite index together with the UNIQUE clause is useful for checking for semantically duplicate rows before they get added to the table, or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.

2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in the new ColumnInfo fields NotNull, Constraint and Default.

2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.

2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)

2015-02-16: The IN predicate now accepts a SELECT statement. See the updated "Predicates" section.

2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea not to use any name appearing in, for example, [7] in your queries, as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".

2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed.
Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details.

Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is, strictly speaking, no longer required. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Not that it was the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.

2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.

2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)

2014-06-05: The RecordSet interface was extended by the new methods FirstRow and Rows.

2014-06-02: Indices on id() are now used by SELECT statements.

2014-05-07: Introduction of Marshal, Schema, Unmarshal.

2014-04-15: Added the optional IF NOT EXISTS clause to CREATE INDEX and the optional IF EXISTS clause to DROP INDEX.

2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.

2014-04-11: Introduction of LIMIT, OFFSET.

2014-04-10: Introduction of query rewriting.

2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example:

If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option):

The syntax is specified using Extended Backus-Naur Form (EBNF)

Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``.

The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified.

QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text.

Each code point is distinct; for instance, upper and lower case letters are different characters.

Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement.

Implementation restriction: A byte order mark is disallowed anywhere in QL statements.

The following terms are used to denote specific character classes

The underscore character _ (U+005F) is considered a letter.
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters.

Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space.

General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.

Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token.

The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.)

Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example

No identifiers are predeclared; however, note that no keyword can be used as an identifier.

Identifiers starting with two underscores are used for the names of virtual meta data tables. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example

The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.

The following character sequences represent operators, delimiters, and other special tokens

Operators consisting of more than one character are referred to by names in the rest of the documentation

An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example

A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example

An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example
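Token sketches for the literal forms above (QL mirrors Go's lexical grammar here):

	foo  Bar9  fooBar                           // identifiers
	42  0600  0xBadFace                         // integer literals
	0.  72.40  2.71828  1e6  .25  6.67428e-11   // floating-point literals
	2.71828i                                    // an imaginary literal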
A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats.

The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.

Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base.

Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves.

After a backslash, certain single-character escapes represent special values

All other sequences starting with a backslash are illegal inside rune literals. For example

A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals.

Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except a back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.

Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example

These examples all represent the same string

If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal.

Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example
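Sketches of the literal forms and a parameterized statement; the table t and its column Name are illustrative:

	'a'  '\t'  '\377'  '\x07'  '\u12e4'      // rune literals
	"hello\nworld"                           // an interpreted string literal
	`raw string, \n stays two characters`    // a raw string literal

	SELECT * FROM t WHERE Name == $1;        // '?1' may be used instead of '$1'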
Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive).

Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type.

A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.

Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G.

A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool.

A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.

A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are

The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment.

A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len.

A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time.

The following functions are implicitly declared

An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression.

A qualified identifier is an identifier qualified with a table/record set name prefix. For example

Primary expressions are the operands for unary and binary expressions. For example

A primary expression of the form s[x] denotes the element of the string s indexed by x. Its type is byte. The value x is called the index. The following rules apply

- The index x must be of an integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.

- A constant index must be non-negative and representable by a value of type int.

- A constant index must be in range if the string s is a literal.

- If x is out of range at run time, a run-time error occurs.

- s[x] is the byte at index x and the type of s[x] is byte.

If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression s[low:high] constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand.

The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices.

If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.
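A minimal sketch of indexing and slicing in a query, assuming a table t with a string column s:

	SELECT s[0] FROM t;          // the byte at index 0
	SELECT s[1:3] FROM t;        // substring of bytes at indices 1 and 2
	SELECT s[:2], s[2:] FROM t;  // missing indices default to 0 and len(s)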
Given an identifier f denoting a predeclared function, an expression of the form f(a1, a2, … an) calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f.

In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns.

Calling an undefined function causes a compile-time error.

Operators combine operands into expressions.

Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions.

Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand.

The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone.

Expressions of the form expr1 LIKE expr2 yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL.

Predicates are special form expressions having a boolean result type.

Expressions of the form are equivalent, including NULL handling, to

The types of the involved expressions must be comparable as defined in "Comparison operators".

Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types:

Expressions of the form are equivalent, including NULL handling, to

The types of the involved expressions must be ordered as defined in "Comparison operators".

Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false.
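Hedged sketches of the predicate forms above; the tables t, u and their columns are illustrative:

	SELECT * FROM t WHERE f IN (1, 2, 3);
	SELECT * FROM t WHERE f IN (SELECT g FROM u);  // one-column subquery
	SELECT * FROM t WHERE f BETWEEN $1 AND $2;
	SELECT * FROM t WHERE s LIKE "^abc";           // the pattern is a regular expression
	SELECT * FROM t WHERE f IS NOT NULL;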
Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR).

Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar.

Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.

	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers
	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers
	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands.

A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration.

For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships

	x = q*y + r  and  |r| < |y|

with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0).

If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation

The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.

For integer operands, the unary operators +, -, and ^ are defined as follows

For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific.

Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL.

For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”.

For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true.

Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program.
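Worked expression sketches for truncated division and shifts, consistent with the rules above (x stands for any signed integer expression):

	7 / 2    // == 3
	7 % 2    // == 1, since 7 == 3*2 + 1
	-7 / 2   // == -3, truncation towards zero
	-7 % 2   // == -1
	x << 1   // same as x*2
	x >> 1   // same as x/2, truncated towards negative infinity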
Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa.

The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.

- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).

- Integer values are comparable and ordered, in the usual way. Note that durations are integers.

- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.

- Rational values are comparable and ordered, in the usual way.

- String values are comparable and ordered, lexically byte-wise.

- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL.

Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally.

The truth tables for logical operations with NULL values

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T.

A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.

- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.

- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.

- x's type and T are both integer or floating point types.

- x's type and T are both complex types.

- x is an integer, except bigint or duration, and T is a string type.

Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
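A few conversion sketches matching the numeric rules above (x is any float32 expression):

	int(3.9)                      // 3: the fraction is discarded
	uint32(int8(uint16(0x10F0)))  // 0xFFFFFFF0: sign extension, then truncation
	float32(x + 0.1)              // rounded to 32-bit precision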
For conversions to and from a string type, the following rules apply

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields the duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of a call like f(h(), i(), j(), c), the function calls and the evaluation of c happen in the order h(), i(), j(), c.

Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause a new column is added to the table. The column must not exist. With the DROP clause an existing column is removed from the table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example

When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of the COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
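A hedged sketch tying the two together; the Budget column is illustrative and the department table is reused from earlier sketches:

	BEGIN TRANSACTION;
		ALTER TABLE department ADD Budget float64;
		ALTER TABLE department DROP COLUMN Budget;
	COMMIT;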
Failure to properly balance any opened transaction level may cause dead locks and/or lose of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. Index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. Index name must not be the same as any of the existing tables and it also cannot be the same as of any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table or an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName to be NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the colum is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4. 
Delete from statements remove rows from a table, which must exist. For example

If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which makes it possible to insert values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines:

The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected, or which indices may possibly improve the statement performance. The create index statements above were directly copied/pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement.

If the statement has nothing special in its plan, the result is the original statement.

To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example

The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList.

RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause.

The result can be filtered using a WhereClause and ordered using the OrderBy clause.
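For example, a filtered and ordered query could be issued as in this sketch (assumed ql API as above; Do iterates the resulting recordset, and the nil context reflects the assumption that a bare SELECT needs no transaction):

    package main

    import (
        "fmt"

        "github.com/cznic/ql" // import path is an assumption
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }
        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
    BEGIN TRANSACTION;
        CREATE TABLE t (Name string, Age int64);
        INSERT INTO t VALUES ("Alice", 42), ("Bob", 17), ("Eve", 30);
    COMMIT;`); err != nil {
            panic(err)
        }
        // WHERE filters the record set; ORDER BY collates it.
        rs, _, err := db.Run(nil, `SELECT Name, Age FROM t WHERE Age > 21 ORDER BY Name;`)
        if err != nil {
            panic(err)
        }
        if err = rs[0].Do(false, func(data []interface{}) (bool, error) {
            fmt.Println(data) // [Alice 42], then [Eve 30]
            return true, nil
        }); err != nil {
            panic(err)
        }
    }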
If a Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions.

A field is a named expression. Identifiers not used as a type in a conversion, or as a function name in the Call clause, denote names of (other) fields, the values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list.

When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause, for example, is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would otherwise be provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value).

Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example

Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same result sets.

The optional OFFSET clause allows ignoring the first N records. For example

The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example

The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example

Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.
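A sketch of the OFFSET and LIMIT clauses in isolation (db is an open *ql.DB as in the earlier examples; the API assumptions are the same):

    package main

    import "github.com/cznic/ql" // import path is an assumption

    // offsetAndLimit issues the two clause variants discussed above against
    // an existing table t.
    func offsetAndLimit(db *ql.DB) error {
        // Produces only rows 11, 12, ... of the record set, if they exist.
        if _, _, err := db.Run(nil, `SELECT * FROM t OFFSET 10;`); err != nil {
            return err
        }
        // Returns at most the first 10 records of the record set.
        _, _, err := db.Run(nil, `SELECT * FROM t LIMIT 10;`)
        return err
    }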
The select statement is evaluated in the following order:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example

Update statements change values of fields in rows of a table. For example

Note: The SET clause is optional.

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying the DB meta data, there exist specially named tables, some of them virtual.

Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is

The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is

The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is

It's possible to obtain a consolidated recordset for all properties of all DB columns using

The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is

The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value, or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example
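For example, with the sketch below (same assumed API), count ignores the NULL row and avg averages only the two non NULL values:

    package main

    import (
        "fmt"

        "github.com/cznic/ql" // import path is an assumption
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }
        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
    BEGIN TRANSACTION;
        CREATE TABLE t (Score int64);
        INSERT INTO t VALUES (10), (20), (NULL);
    COMMIT;`); err != nil {
            panic(err)
        }
        rs, _, err := db.Run(nil, `SELECT count(Score), avg(Score) FROM t;`)
        if err != nil {
            panic(err)
        }
        rs[0].Do(false, func(data []interface{}) (bool, error) {
            fmt.Println(data) // expected: count 2, avg 15
            return true, nil
        })
    }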
The built-in function date returns the time corresponding to yyyy-mm-dd hh:mm:ss + nsec nanoseconds in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1.

A daylight saving time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which.

A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.

The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, Mon Jan 2 15:04:05 -0700 MST 2006, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", depends on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). Formatting the same t value thus returns one string on a machine in the CET time zone, but may return a different string on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using Go's number formatting functions in the `strconv` package. Only the first argument is mandatory; the default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique, automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables).

If id() without arguments is called for a row which is not a table record then the result value is NULL.

If id() has one argument it must be a table name of a table in a cross join. For example
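In the sketch below (same assumed API), id(emp) names the table whose record id is wanted inside a cross join; a bare id() would be NULL there, because a joined row is not a table record:

    package main

    import (
        "fmt"

        "github.com/cznic/ql" // import path is an assumption
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }
        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
    BEGIN TRANSACTION;
        CREATE TABLE dept (Name string);
        CREATE TABLE emp (Name string, Dept string);
        INSERT INTO dept VALUES ("R/D");
        INSERT INTO emp VALUES ("Alice", "R/D");
    COMMIT;`); err != nil {
            panic(err)
        }
        rs, _, err := db.Run(nil, `
    SELECT id(emp), emp.Name, dept.Name FROM emp, dept WHERE emp.Dept == dept.Name;`)
        if err != nil {
            panic(err)
        }
        rs[0].Do(false, func(data []interface{}) (bool, error) {
            fmt.Println(data)
            return true, nil
        })
    }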
The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. The column values must be of an ordered type. For example

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.

The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, Mon Jan 2 15:04:05 -0700 MST 2006, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC.

When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset.

If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
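A combined sketch of parseTime and the time component functions above (same assumed API; the (layout, value) argument order of parseTime mirrors Go's time.Parse and is an assumption):

    package main

    import (
        "fmt"

        "github.com/cznic/ql" // import path is an assumption
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            panic(err)
        }
        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
    BEGIN TRANSACTION;
        CREATE TABLE events (At time);
        INSERT INTO events VALUES (parseTime("2006-01-02 15:04:05", "2021-06-01 09:30:15"));
    COMMIT;`); err != nil {
            panic(err)
        }
        rs, _, err := db.Run(nil, `SELECT hour(At), minute(At), second(At) FROM events;`)
        if err != nil {
            panic(err)
        }
        rs[0].Do(false, func(data []interface{}) (bool, error) {
            fmt.Println(data) // expected: [9 30 15]
            return true, nil
        })
    }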
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.

The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, or imag functions is NULL the result is NULL.

For the numeric types, the following sizes are guaranteed

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use.

The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.
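A sketch of creating an index and issuing a WHERE filter of the recognized column-relOp-constant shape (assumed API as before; the index name tName is arbitrary but, per the rules above, differs from every column name of t):

    package main

    import "github.com/cznic/ql" // import path is an assumption

    // indexedFilter adds an index on t.Name and runs a query whose WHERE
    // pattern can trigger index use.
    func indexedFilter(db *ql.DB) error {
        ctx := ql.NewRWCtx()
        if _, _, err := db.Run(ctx, `
    BEGIN TRANSACTION;
        CREATE INDEX IF NOT EXISTS tName ON t (Name);
    COMMIT;`); err != nil {
            return err
        }
        // Name == "Alice" matches the column relOp constant pattern.
        _, _, err := db.Run(nil, `SELECT * FROM t WHERE Name == "Alice";`)
        return err
    }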
Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example

Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
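For instance, a full, verbose run might be invoked as below; -v, -run, -bench and -timeout are standard go test flags, while the benchmark selection pattern is only an illustration:

    go test -v -run NONE -bench . -timeout 24h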
sqlbind is a set of database/sql helpers to remove a lot of boilerplate code while always using standard database/sql calls.

Basic usage, using maps:

Automatic IN clause expansion:

Variable args:

Structs, using tags to define DB field names:

Add args to a struct (e.g. from query string parameters):

Named placeholders are automatically translated to the right driver-dependent placeholder: ? for MySQL (default style) or $N for PostgreSQL.

Colons inside quotes are ignored and do not need to be escaped (":foo" will neither be rewritten nor considered a named parameter); otherwise they need to be doubled (::foo will be rewritten to :foo but not be considered a named parameter).

Not all fields need to be expanded by ::names and ::name=::value. This can be achieved using an optional parameter to sqlbind.Named:

or using struct tags (better performance):

Additional variables can be added to SQL queries:

Braces inside quotes are ignored: "{value}" will not be modified.

In a REST API, PATCH update calls may update only certain fields. When using structs with plain types, it is impossible to differentiate between empty fields ({"name":""}), null fields ({"name": null}) and missing fields ({}). Using pointers, one can differentiate between empty fields and null/missing fields, but not between null and missing fields. In this case, nil values are usually considered missing. sqlbind will never expand nil pointer values in ::names and ::name=::value.

https://github.com/jfbus/jsontypes defines types that manage the various cases: null values, missing JSON fields, zero/empty values, read-only values. All types are automatically converted from/to their underlying type when reading/writing to the database.

* jsontypes.NullString will either be expanded to "" ("" in JSON), NULL (null in JSON) or not expanded (absent from JSON).
* jsontypes.String will either be expanded to "" ("" or null in JSON) or not expanded (absent from JSON).
* jsontypes.ROString will never be expanded.

More generally, all structs that implement `Missing() bool` will be managed by sqlbind.

Result struct binding: QueryRow does not expose column names, so Query+ScanRow can be used instead. Slices of structs are not mapped, only structs.

You can build a SQLBinder instance:
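A minimal sketch of the intended flow, assuming Named returns the rewritten query plus the ordered argument list (the exact signature and the import path are assumptions based on the description above):

    package main

    import (
        "database/sql"
        "fmt"

        "github.com/jfbus/sqlbind" // import path is an assumption
    )

    func query(db *sql.DB) error {
        // Named is assumed to rewrite :name into the driver's placeholder
        // style and to return the query plus the ordered argument list.
        q, args, err := sqlbind.Named(
            `SELECT id, name FROM users WHERE name = :name`,
            map[string]interface{}{"name": "Alice"},
        )
        if err != nil {
            return err
        }
        rows, err := db.Query(q, args...)
        if err != nil {
            return err
        }
        defer rows.Close()
        for rows.Next() {
            var id int
            var name string
            if err := rows.Scan(&id, &name); err != nil {
                return err
            }
            fmt.Println(id, name)
        }
        return rows.Err()
    }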
`shlike` implements a flexible configuration file format, based on POSIX `/bin/sh` word splitting, quoting, and expansion. See the README.md file (https://github.com/3ofcoins/shlike/) for a description of the configuration syntax, and GoDoc (https://godoc.org/pkg/github.com/3ofcoins/shlike/) for API details.
Package stock allows using the Yahoo Finance API to get historical quotes data. It uses YQL, pointing to yahoo.finance.historicaldata, to get the necessary info.
Photo-album-showcase fetches photos from the Photos API (https://jsonplaceholder.typicode.com/photos). Results can be filtered by album IDs, photo IDs, titles, URLs, and/or thumbnail URLs.

Usage:

The -id flag filters photos by one or more photo IDs. Multiple IDs must be comma-separated.

The -albumid flag filters photos by one or more album IDs. Multiple album IDs must be comma-separated.

The -title flag filters photos by one or more titles. Multiple titles must be comma-separated. Titles containing spaces must be enclosed in single quotes (e.g., `-title 'title 1','title 2'`).

The -url flag filters photos by one or more URLs. Multiple URLs must be comma-separated.

The -thumburl flag filters photos by one or more thumbnail URLs. Multiple thumbnail URLs must be comma-separated.

Examples:

Fetch photos with IDs 1, 2, and 5 from album 1:

Fetch photos with titles 'accusamus beatae ad facilis cum similique qui sunt' and 'reprehenderit est deserunt velit ipsam':

Fetch photos with URLs https://via.placeholder.com/600/92c952 and https://via.placeholder.com/600/f66b97:

Fetch photos with thumbnail URLs https://via.placeholder.com/150/92c952 and https://via.placeholder.com/150/f66b97:
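Assuming the binary is named photo-album-showcase (an assumption; the flags are as documented above), the four examples could be invoked as:

    photo-album-showcase -albumid 1 -id 1,2,5
    photo-album-showcase -title 'accusamus beatae ad facilis cum similique qui sunt','reprehenderit est deserunt velit ipsam'
    photo-album-showcase -url https://via.placeholder.com/600/92c952,https://via.placeholder.com/600/f66b97
    photo-album-showcase -thumburl https://via.placeholder.com/150/92c952,https://via.placeholder.com/150/f66b97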
Package client provides a Go SDK for interacting with the Market Data API. It includes functionality for making API requests, handling responses, managing rate limits, and logging. The SDK supports various data types including stocks, options, and market status information.

To use the SDK, you first need to create an instance of MarketDataClient using the GetClient function. This client will then be used to make API requests to the Market Data API.

The SDK uses an API token for authentication. The token can be set as an environment variable (MARKETDATA_TOKEN) or directly in your code. However, storing tokens in your code is not recommended for security reasons.

The MarketDataClient automatically tracks and manages the API's rate limits. You can check if the rate limit has been exceeded with the RateLimitExceeded method.

The SDK logs all unsuccessful (400-499 and 500-599) responses to specific log files based on the response status code. Successful responses (200-299) are logged when debug mode is enabled. Logs include detailed information such as request and response headers, response status code, and the response body.

Debug mode can be enabled by calling the Debug method on the MarketDataClient instance. When enabled, the SDK will log detailed information about each request and response, which is useful for troubleshooting.

Package client provides a Go SDK for interacting with the Market Data API. The Market Data Go Client includes functionality for making API requests, handling responses, managing rate limits, and logging. The SDK supports various data types including stocks, options, indices, and market status information.

The MarketDataClient is a singleton client that is used to interact with the Market Data API. If you are using an environment variable to store your token, the client will be initialized automatically when the package is imported. If you are not using an environment variable, you can initialize the client manually using the NewClient method. After you initialize it for the first time, it is not necessary to reinitialize it again. The client is thread-safe and can be used across multiple goroutines. Do not attempt to create multiple instances of the client.

Package client includes types and methods to access the Funds / Candles endpoint. Retrieve historical price candles for any supported fund symbol. Use FundCandlesRequest to make requests to the endpoint using any of the three supported execution methods:

Package client includes types and methods to access the Index Candles endpoint. Get historical price candles for any supported stock index. Use IndicesCandlesRequest to make requests to the endpoint using any of the three supported execution methods:

Package client includes types and methods to access the Index Quotes endpoint. Retrieve real-time quotes for any supported index. Utilize IndexQuoteRequest to make requests to the endpoint through one of the three supported execution methods:

Package client provides a collection of structs for logging. The Market Data Go SDK provides a comprehensive logging framework tailored for HTTP request and response tracking. It facilitates detailed logging of HTTP interactions, including request details, response data, error handling, and rate limit monitoring. The logging functionality is designed to support a wide range of market data types such as stocks, options, indices, and market status information, making it an essential tool for developers integrating market data into their applications.
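A hedged sketch of the flow described above; the import path is a placeholder, and the GetClient, Debug, and RateLimitExceeded shapes are inferred from the prose rather than confirmed signatures:

    package main

    import (
        "fmt"

        "example.com/marketdata/client" // placeholder import path (assumption)
    )

    func main() {
        // GetClient is documented above; the token is read from the
        // MARKETDATA_TOKEN environment variable.
        c, err := client.GetClient()
        if err != nil {
            panic(err)
        }
        c.Debug(true) // argument shape assumed; enables per-request logging
        if c.RateLimitExceeded() {
            fmt.Println("rate limit exceeded; backing off")
        }
    }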
Package client includes types and methods to access the Market Status endpoint. Retrieve current, future, and historical open / closed status information. Utilize MarketStatusRequest to make requests to the endpoint through one of the three supported execution methods:

Package client provides functionalities to interact with the Options Chain endpoint. Retrieve a complete or filtered options chain for a given underlying symbol. Both real-time and historical requests are possible. Utilize OptionChainRequest for querying the endpoint through one of the three available methods:

Package client provides functionalities to interact with the Options Expirations endpoint. Get a list of current or historical option expiration dates for an underlying symbol. Utilize OptionsExpirationsRequest for querying the endpoint through one of the three available methods:

Package client provides functionalities to interact with the Options Lookup endpoint. Look up the official option symbol based on a user's text input. Utilize [OptionsLookupRequest] for querying the endpoint through one of the three available methods:

Package client provides functionalities to interact with the Options Quotes endpoint. Retrieve an option quote for a given option symbol. Utilize OptionQuoteRequest for querying the endpoint through one of the three available methods:

Package client provides functionalities to interact with the Option Strikes endpoint. Retrieve a complete or filtered list of option strike prices for a given underlying symbol. Both real-time and historical requests are possible. Utilize OptionStrikesRequest for querying the endpoint through one of the three available methods:

Package client provides functionalities to interact with the Bulk Stock Candles endpoint. Get bulk historical price data for many stock symbols at once, or even obtain a full market snapshot. Utilize the BulkStockCandlesRequest for querying the endpoint through one of the three available methods:

Package client includes types and methods to access the Bulk Stock Quotes endpoint. Retrieve live quotes for multiple stock symbols in a single request, or even get a full market snapshot with a quote for all symbols. Utilize BulkStockQuotesRequest to make requests to the endpoint through one of the three supported execution methods:

Package client includes types and methods to access the Stock Candles endpoint. Retrieve historical price candles for any supported stock symbol. Use StockCandlesRequest to make requests to the endpoint using any of the three supported execution methods:

Package client includes types and methods to access the Stock Earnings endpoint. Retrieve earnings data for any supported stock symbol. Use StockEarningsRequest to make requests to the endpoint using any of the three supported execution methods:

Package client includes types and methods to access the Stock News endpoint. Retrieve news articles for any supported stock symbol. Use StockNewsRequest to make requests to the endpoint using any of the three supported execution methods:

Package client includes types and methods to access the Stock Quotes endpoint. Retrieve live quotes for any supported stock symbol. Utilize StockQuoteRequest to make requests to the endpoint through one of the three supported execution methods:

Package stocks provides the /stocks endpoints.
Package pot (Pieces of Text) provides a lightweight text serialization parser that is intended to be used together with Go's encoding.TextUnmarshaler interface. There is only de-serialization support; it's easy enough to generate properly formatted POT directly from an encoding.TextMarshaler. The POT format is similar to JSON but with some differences required by the application that POT was created for. There are two compound types, a dictionary and a list, and two string types. The dictionary type may hold duplicate keys and the key order is maintained. This makes it more of an itemized list than a dictionary type. The list type simply holds a sequence of other types. There are no numeric or boolean types; all parsing eventually produces strings. It's up to an application's TextUnmarshaler functions to parse these strings. Strings are separated by spaces; strings may contain spaces if quoted or escaped. The characters that may be used for keys in dictionaries are artificially limited, similar to variable names in most programming languages. The characters a-z, A-Z, 0-9 are allowed in any position; the character '-' is allowed in any position but the first. Dictionary keys are delimited from values by ':'. Dictionaries: Lists: Strings: The escape character is '\'. Characters '{', '}', '[', ']', ':', ' ' must be quoted or escaped in strings. Characters '\' and '"' must be escaped in strings. Additionally '\n' produces a new-line, '\r' a carriage return and '\t' a tab. Create a new root level parser and call parser.Next() until it returns nil or an error. There is also a ParserScanner type that wraps a parser to provide a bufio.Scanner-like API. Example:
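Below is a minimal sketch of that Next loop. The constructor name (NewParser), the import path, and the shape of the returned value are assumptions based on the description above, not a confirmed API.

    package main

    import (
    	"fmt"
    	"log"
    	"strings"

    	"example.com/pot" // import path is hypothetical
    )

    func main() {
    	// NewParser (name assumed) creates a root level parser.
    	p := pot.NewParser(strings.NewReader(`{name: "example" tags: [a b c]}`))
    	for {
    		// Per the description above, Next returns the next parsed
    		// piece of text, nil at end of input, or an error.
    		node, err := p.Next()
    		if err != nil {
    			log.Fatal(err)
    		}
    		if node == nil {
    			break
    		}
    		fmt.Println(node)
    	}
    }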
Package enmime implements a MIME parsing library for Go. It's built on top of Go's included mime/multipart support, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. Each Part implements io.Reader, providing access to the content it represents. If the part was encoded in quoted-printable or base64, it is decoded prior to being accessed by the Reader. If you need to locate a particular Part, you can pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. EnvelopeFromMessage returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be downconverted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/jhillyerd/enmime
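A short sketch using the names mentioned above. Only EnvelopeFromMessage is named by the text; the *mail.Message argument type and the Text field on Envelope are assumptions about this version of the API.

    package main

    import (
    	"fmt"
    	"log"
    	"net/mail"
    	"os"

    	"github.com/jhillyerd/enmime"
    )

    func main() {
    	msg, err := mail.ReadMessage(os.Stdin)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// EnvelopeFromMessage builds the Part tree and sorts it into
    	// an Envelope (the argument type is assumed here).
    	env, err := enmime.EnvelopeFromMessage(msg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Plain-text portion, down-converted from HTML if no plain
    	// text Part was available (field name assumed).
    	fmt.Println(env.Text)
    }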
Package errors provides a detailed, stack-traced description for errors, plus an exception-like (handle-check) style of handling. This package has two goals:

1. Provide a convenient error type that carries an error code to easily distinguish errors, that can be extended with additional messages (called annotations here) as the error flows through the program, and that can be printed with a stack trace.

2. Provide a convenient way to handle errors in exception style.

With the first point everything is clear; let's talk a little about the second. Go's error handling idiom usually looks like: It is a very "holy war" question which way to handle errors, exception-like or "returning err"-like, is better. Everyone chooses for themselves which way is more appropriate. But doubtlessly the "returning err" style has some disadvantages:

1. Error handling logic is mixed with "business" logic, so it shadows it.
2. Code repetition, which can cause bugs.
3. Verbosity.

That's why the Go 2 draft https://go.googlesource.com/proposal/+/master/design/go2draft.md contains a new Handle-Check error handling approach, which is quite similar to the exception approach. This package provides a similar Handle-Check mechanism, based on panics. Details are below.

All errors created by this package are returned as Go's native error interface. The underlying type is immutable, so one can't change anything in an already existing error, but one can use the wrap functions, which create a new error based on the underlying one. New, Errorf, Codef create a new error; Wrap, WrapAnnotated wrap an existing one. An error can be created with an error code, which has type interface{}. The error code can be obtained from an error with the ErrCode function. Omitting the error code at creation is equivalent to a nil error code. When one wraps an error without providing an error code, the code is preserved; otherwise it is overridden. With WrapAnnotated one can add additional messages to an error. Annotations are printed in the stack trace, bound to the corresponding function (where they were added). If an annotation was added outside the functions in the stack trace, it is printed in a separate section. The %v verb prints full info, with stack trace and annotations; %s prints only the error message; %q prints only the error message, quoted. E.g. with %v: To get the underlying error, use the Cause function.

All creation and wrapping functions have duplicates with a _skipstack suffix and a skip argument. With those functions one can specify the correct first stack frame to print in the stack trace; skip==0 means starting from the caller of the function. This can be useful, for example, to skip the stack frames of a wrapper object if you want your own wrapper of this package. Example:

This is a comprehensive example, intended to be close to real life, of using this package. It's easier to see it in code, but if you use godoc, please notice the https://godoc.org/github.com/pashaosipyants/errors/example_auxiliary package, which is used here. Imagine there is an API to create a task that executes another service. In addition, the user of this API wants to know whether this task is already done. This API, of course, can return an error. E.g. a certain task may already be created; if so, the error should report whether it is done or not.
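A minimal sketch of the creation, wrapping, and code-inspection functions described above. The function names come from the text, but the argument order of New and WrapAnnotated is an assumption, not the package's confirmed signatures.

    package main

    import (
    	"fmt"

    	"github.com/pashaosipyants/errors"
    )

    // ErrCodeTaskExists is an illustrative error code; codes are
    // plain interface{} values per the description above.
    const ErrCodeTaskExists = "task_exists"

    func createTask(id string) error {
    	// New creates a stack-traced error; attaching the code this
    	// way is an assumed signature.
    	return errors.New("task already created", ErrCodeTaskExists)
    }

    func main() {
    	if err := createTask("42"); err != nil {
    		// WrapAnnotated adds an annotation bound to this function
    		// in the printed stack trace (argument order assumed).
    		err = errors.WrapAnnotated(err, "while provisioning")
    		if errors.ErrCode(err) == ErrCodeTaskExists {
    			// %v prints the full info: message, annotations,
    			// and stack trace.
    			fmt.Printf("%v\n", err)
    		}
    	}
    }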
Package money implements immutable amounts and exchange rates. Currency is represented as an integer index into an in-memory array that stores properties defined by ISO 4217: The currently supported currencies use scales of 0, 2, or 3: Amount is a struct with two fields: ExchangeRate is a struct with three fields: The range of an amount is determined by the scale of its currency. Similarly, the range of an exchange rate is determined by the scale of its quote currency. Here are the ranges for scales 0, 2, and 3: Subnormal numbers are not supported by the underlying decimal.Decimal type. Consequently, amounts and exchange rates between -0.00000000000000000005 and 0.00000000000000000005 inclusive are rounded to 0. The package provides methods for converting: See the documentation for each method for more details.

Each arithmetic operation is carried out in two steps:

1. The operation is initially performed using uint64 arithmetic. If no overflow occurs, the exact result is immediately returned. If an overflow does occur, the operation proceeds to step 2.

2. The operation is repeated with increased precision using big.Int arithmetic. The result is then rounded to 19 digits. If no significant digits are lost during rounding, the inexact result is returned. If any significant digit is lost, an overflow error is returned.

Step 1 was introduced to improve performance by avoiding heap allocation for big.Int and the complexities associated with big.Int arithmetic. It is expected that, in transactional financial systems, the majority of arithmetic operations will successfully compute an exact result during step 1.

The following rules are used to determine the significance of digits during step 2:

- Amount.Add, Amount.Sub, Amount.SubAbs, Amount.Mul, Amount.FMA, Amount.Quo, Amount.QuoRem, ExchangeRate.Conv, ExchangeRate.Mul, ExchangeRate.Inv: All digits in the integer part are significant. In the fractional part, digits are significant up to the scale of the currency.

- Amount.Rat: All digits in the integer part are significant, while digits in the fractional part are considered insignificant.

Implicit rounding is applied when a result exceeds 19 digits. In such cases, the result is rounded to 19 digits using half-to-even rounding. This method ensures that rounding errors are evenly distributed between rounding up and rounding down. For all arithmetic operations, the result is the one that would be obtained by computing the exact mathematical result with infinite precision and then rounding it to 19 digits. In addition to implicit rounding, the package provides several methods for explicit rounding: See the documentation for each method for more details.

All methods are panic-free and pure. Errors are returned in the following cases:

- Currency Mismatch. All arithmetic operations, except for Amount.Rat, return an error if the operands are denominated in different currencies.

- Division by Zero. Unlike the standard library, Amount.Quo, Amount.QuoRem, Amount.Rat, and ExchangeRate.Inv do not panic when dividing by 0. Instead, they return an error.

- Overflow. Unlike standard integers, there is no "wrap around" for amounts at certain sizes. Arithmetic operations return an error for out-of-range values.

- Underflow. All arithmetic operations, except for ExchangeRate.Inv and ExchangeRate.Mul, do not return an error in case of underflow. If the result is an amount between -0.00000000000000000005 and 0.00000000000000000005 inclusive, it is rounded to 0. ExchangeRate.Inv and ExchangeRate.Mul return an error in cases of underflow, as the result of these operations is an exchange rate, and exchange rates cannot be 0.
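A small sketch of the error-returning style described above. Add is a method listed in the text; the ParseAmount constructor name and the import path are assumptions.

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/govalues/money" // import path assumed
    )

    func main() {
    	// ParseAmount is an assumed constructor name; see the package
    	// docs for the real way to create an Amount.
    	price, err := money.ParseAmount("USD", "10000.00")
    	if err != nil {
    		log.Fatal(err)
    	}
    	tax, err := money.ParseAmount("USD", "825.00")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Add returns an error on currency mismatch or overflow
    	// instead of panicking, per the rules above.
    	total, err := price.Add(tax)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(total)
    }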
This example calculates the effective interest rate for a 10% nominal interest rate compounded monthly on a USD 10,000 balance. In this example, a loan amortization table is generated for a loan with an initial amount of USD 12,000, an annual interest rate of 10%, and a repayment period of 1 year. In this example, we parse the string "840D000000001234", which represents -12.34 USD, according to the specification for "DE54, Additional Amounts" in ISO 8583. This is an example of how to parse a monetary amount formatted as money.proto. This is an example of how to parse a monetary amount formatted according to the Stripe API specification. In this example, the sales tax amount is calculated for a product with a given price after tax, using a specified tax rate.
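The first example above is simple enough to verify by hand: monthly compounding of a nominal rate r yields an effective rate of (1 + r/12)^12 - 1. A quick float64 illustration (deliberately not using the package's decimal types, which exist to avoid exactly this kind of binary-float rounding):

    package main

    import (
    	"fmt"
    	"math"
    )

    func main() {
    	nominal := 0.10
    	// Effective annual rate with monthly compounding:
    	// (1 + 0.10/12)^12 - 1 ≈ 0.1047, i.e. about 10.47%.
    	effective := math.Pow(1+nominal/12, 12) - 1
    	fmt.Printf("%.4f\n", effective) // 0.1047
    }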
Command pigeon generates parsers in Go from a PEG grammar. From Wikipedia [0]: Its features and syntax are inspired by the PEG.js project [1], while the implementation is loosely based on [2]. Formal presentation of the PEG theory by Bryan Ford is also an important reference [3]. An introductory blog post can be found at [4]. The pigeon tool must be called with PEG input as defined by the accepted PEG syntax below. The grammar may be provided by a file or read from stdin. The generated parser is written to stdout by default. The following options can be specified: The tool makes no attempt to format the code, nor to detect the required imports. It is recommended to use goimports to properly generate the output code: The goimports tool can be installed with: If the code blocks in the grammar (see below, section "Code block") are golint- and go vet-compliant, then the resulting generated code will also be golint- and go vet-compliant. The generated code doesn't use any third-party dependency unless code blocks in the grammar require such a dependency. The accepted syntax for the grammar is formally defined in the grammar/pigeon.peg file, using the PEG syntax. What follows is an informal description of this syntax. Identifiers, whitespace, comments and literals follow the same notation as the Go language, as defined in the language specification (http://golang.org/ref/spec#Source_code_representation): The grammar must be Unicode text encoded in UTF-8. New lines are identified by the \n character (U+000A). Space (U+0020), horizontal tabs (U+0009) and carriage returns (U+000D) are considered whitespace and are ignored except to separate tokens. A PEG grammar consists of a set of rules. A rule is an identifier followed by a rule definition operator and an expression. An optional display name - a string literal used in error messages instead of the rule identifier - can be specified after the rule identifier. E.g.: The rule definition operator can be any one of those: A rule is defined by an expression. The following sections describe the various expression types. Expressions can be grouped by using parentheses, and a rule can be referenced by its identifier in place of an expression. The choice expression is a list of expressions that will be tested in the order they are defined. The first one that matches will be used. Expressions are separated by the forward slash character "/". E.g.: Because the first match is used, it is important to think about the order of expressions. For example, in this rule, "<=" would never be used because the "<" expression comes first: The sequence expression is a list of expressions that must all match in that same order for the sequence expression to be considered a match. Expressions are separated by whitespace. E.g.: A labeled expression consists of an identifier followed by a colon ":" and an expression. A labeled expression introduces a variable named with the label that can be referenced in the code blocks in the same scope. The variable will have the value of the expression that follows the colon. E.g.: The variable is typed as an empty interface, and the underlying type depends on the following: For terminals (character and string literals, character classes and the any matcher), the value is []byte. E.g.: For predicates (& and !), the value is always nil. E.g.: For a sequence, the value is a slice of empty interfaces, one for each expression value in the sequence. 
The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a repetition (+ and *), the value is a slice of empty interfaces, one for each repetition. The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a choice expression, the value is that of the matching choice. E.g.: For the optional expression (?), the value is nil or the value of the expression. E.g.: Of course, the type of the value can be anything once an action code block is used. E.g.: An expression prefixed with the ampersand "&" is the "and" predicate expression: it is considered a match if the following expression is a match, but it does not consume any input. An expression prefixed with the exclamation point "!" is the "not" predicate expression: it is considered a match if the following expression is not a match, but it does not consume any input. E.g.: The expression following the & and ! operators can be a code block. In that case, the code block must return a bool and an error. The operator's semantic is the same: & is a match if the code block returns true, ! is a match if the code block returns false. The code block has access to any labeled value defined in its scope. E.g.: An expression followed by "*", "?" or "+" is a match if the expression occurs zero or more times ("*"), zero or one time ("?"), or one or more times ("+"), respectively. The match is greedy; it will match as many times as possible. E.g.: A literal matcher tries to match the input against a single character or a string literal. The literal may be a single-quoted single character, a double-quoted string or a backtick-quoted raw string. The same rules as in Go apply regarding the allowed characters and escapes. The literal may be followed by a lowercase "i" (outside the ending quote) to indicate that the match is case-insensitive. E.g.: A character class matcher tries to match the input against a class of characters inside square brackets "[...]". Inside the brackets, characters represent themselves and the same escapes as in string literals are available, except that the single- and double-quote escape is not valid; instead, the closing square bracket "]" must be escaped to be used. Character ranges can be specified using the "[a-z]" notation. Unicode classes can be specified using the "[\pL]" notation, where L is a single-letter Unicode class of characters, or using the "[\p{Class}]" notation where Class is a valid Unicode class (e.g. "Latin"). As for string literals, a lowercase "i" may follow the matcher (outside the ending square bracket) to indicate that the match is case-insensitive. A "^" as first character inside the square brackets indicates that the match is inverted (it is a match if the input does not match the character class matcher). E.g.: The any matcher is represented by the dot ".". It matches any character except the end of file, thus the "!." expression is used to indicate "match the end of file". E.g.: Code blocks can be added to generate custom Go code. There are three kinds of code blocks: the initializer, the action and the predicate. All code blocks appear inside curly braces "{...}". The initializer must appear first in the grammar, before any rule. It is copied as-is (minus the wrapping curly braces) at the top of the generated parser. It may contain function declarations, types, variables, etc. just like any Go file. Every symbol declared here will be available to all other code blocks.
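To make these pieces concrete, here is a small hypothetical grammar that follows the rules just described: an initializer, labeled expressions, and action code blocks whose Go bodies return (interface{}, error). It is a sketch, not taken from the pigeon test suite.

    {
    // Initializer: copied verbatim to the top of the generated parser.
    package main

    import "strconv"
    }

    // Sum matches two numbers around "+"; a and b are labeled values.
    Sum <- a:Number "+" b:Number {
        // Action block: generated as a method on *current; the labels
        // arrive as interface{} holding whatever Number's action returned.
        return a.(int) + b.(int), nil
    }

    // Number matches one or more digits; c.text holds the matched bytes.
    Number <- [0-9]+ {
        return strconv.Atoi(string(c.text))
    }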
Although the initializer is optional in a valid grammar, it is usually required to generate a valid Go source code file (for the package clause). E.g.: Action code blocks are code blocks declared after an expression in a rule. Those code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as interface{}) and must return two values, the first being the value of the expression (an interface{}), and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: Predicate code blocks are code blocks declared immediately after the and "&" or the not "!" operators. Like action code blocks, predicate code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as interface{}) and must return two values, the first being a bool and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: The current type is a struct that provides two useful fields that can be accessed in action and predicate code blocks: "pos" and "text". The "pos" field indicates the current position of the parser in the source input. It is itself a struct with three fields: "line", "col" and "offset". Line is a 1-based line number, col is a 1-based column number that counts runes from the start of the line, and offset is a 0-based byte offset. The "text" field is the slice of bytes of the current match. It is empty in a predicate code block. The parser generated by pigeon exports a few symbols so that it can be used as a package with public functions to parse input text. The exported API is: See the godoc page of the generated parser for the test/predicates grammar for an example documentation page of the exported API: http://godoc.org/github.com/mna/pigeon/test/predicates. Like the grammar used to generate the parser, the input text must be UTF-8-encoded Unicode. The start rule of the parser is the first rule in the PEG grammar used to generate the parser. A call to any of the Parse* functions returns the value generated by executing the grammar on the provided input text, and an optional error. Typically, the grammar should generate some kind of abstract syntax tree (AST), but for simple grammars it may evaluate the result immediately, such as in the examples/calculator example. There are no constraints imposed on the author of the grammar, it can return whatever is needed. When the parser returns a non-nil error, the error is always of type errList, which is defined as a slice of errors ([]error). Each error in the list is of type *parserError. This is a struct that has an "Inner" field that can be used to access the original error. So if a code block returns some well-known error like: The original error can be accessed this way: By default the parser will continue after an error is returned and will accumulate all errors found during parsing. If the grammar reaches a point where it shouldn't continue, a panic statement can be used to terminate parsing. The panic will be caught at the top-level of the Parse* call and will be converted into a *parserError like any error, and an errList will still be returned to the caller.
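As a sketch of using that exported API and inspecting the returned errors: this snippet must live in the generated parser's package (errList and parserError are unexported), and ErrDivByZero stands in for any well-known error value a code block might return.

    package main // same package as the generated parser

    import (
    	"fmt"
    	"log"
    )

    // ErrDivByZero is hypothetical: a well-known error a code block
    // in the grammar might return.
    var ErrDivByZero = fmt.Errorf("division by zero")

    func main() {
    	// Parse is generated by pigeon; the filename argument is used
    	// only for positions in error messages. Options are omitted.
    	got, err := Parse("input.txt", []byte("6 / 0"))
    	if err != nil {
    		// As described above, err is always an errList ([]error)
    		// whose entries are *parserError values with an Inner field.
    		if list, ok := err.(errList); ok {
    			for _, e := range list {
    				if pe, ok := e.(*parserError); ok && pe.Inner == ErrDivByZero {
    					fmt.Println("caught division by zero")
    				}
    			}
    		}
    		log.Fatal(err)
    	}
    	fmt.Println(got)
    }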
The divide by zero error in the examples/calculator grammar leverages this feature (no special code is needed to handle division by zero: if it happens, the runtime panics, and the panic is recovered and returned as a parsing error). Providing good error reporting in a parser is not a trivial task. Part of it is provided by the pigeon tool, by offering features such as filename, position and rule name in the error message, but an important part of good error reporting needs to be done by the grammar author. For example, many programming languages use double-quotes for string literals. Usually, if the opening quote is found, the closing quote is expected; if none is found, no other rule will match, so there is no need to backtrack and try other choices: an error should be added to the list and the match should be consumed. In order to do this, the grammar can look something like this: This is just one example, but it illustrates the idea that error reporting needs to be thought out when designing the grammar. Generated parsers have user-provided code mixed with pigeon code in the same package, so there is no package boundary in the resulting code to prevent access to unexported symbols. What is meant to be implementation details in pigeon is also available to user code - which doesn't mean it should be used. For this reason, it is important to precisely define what is intended to be the supported API of pigeon, the parts that will be stable in future versions. The "stability" of the API attempts to make a similar guarantee as the Go 1 compatibility [5]. The following lists what parts of the current pigeon code fall under that guarantee (features may be added in the future):

- The pigeon command-line flags and arguments: those will not be removed and will maintain the same semantics.

- The explicitly exported API generated by pigeon. See [6] for the documentation of this API on a generated parser.

- The PEG syntax, as documented above.

- The code blocks (except the initializer) will always be generated as methods on the *current type, and this type is guaranteed to have the fields pos (type position) and text (type []byte). There are no guarantees on other fields and methods of this type.

- The position type will always have the fields line, col and offset, all defined as int. There are no guarantees on other fields and methods of this type.

- The type of the error value returned by the Parse* functions, when not nil, will always be errList defined as a []error. There are no guarantees on methods of this type, other than the fact it implements the error interface.

- Individual errors in the errList will always be of type *parserError, and this type is guaranteed to have an Inner field that contains the original error value. There are no guarantees on other fields and methods of this type.

References:
Package ward provides tooling to connect to the Ward Trucking API. This is for truck shipments, not small parcels: think LTL (less than truckload) shipments. This code was created from the Ward API documentation. It uses an XML SOAP API. You will need a Ward account and must register for access to use this. Currently this package can perform:
- pickup requests
- rate quotes

To create a pickup request:
- Set test or production mode (SetProductionMode()).
- Set shipper information (ShipperInfomation{}).
- Set shipment data (PickupRequestShipment{}).
- Create the pickup request object (PickupRequest{}).
- Request the pickup (RequestPickup()).
- Check for any errors.

To get a rate quote (see the sketch after this list):
- Create the item you want a quote on (RateQuoteDetailItem{}).
- Create the inner request with details (RateQuoteRequestInner{}).
- Create the rate quote request (RateQuoteRequest{}).
- Request the rate quote (RateQuote()).
- Check for any errors.
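Here is that rate-quote flow as a rough sketch. Only the type and function names come from the list above; every field name shown is hypothetical, and the package's import path is not given in the text, so the package clause and imports are omitted.

    // Hypothetical usage; all struct field names are illustrative only.
    ward.SetProductionMode() // or the test-mode equivalent; see the list above

    item := ward.RateQuoteDetailItem{ // fields are assumptions
    	Weight: 1200,
    	Class:  "70",
    }
    inner := ward.RateQuoteRequestInner{ // fields are assumptions
    	OriginZip:      "35203",
    	DestinationZip: "30303",
    	Items:          []ward.RateQuoteDetailItem{item},
    }
    req := ward.RateQuoteRequest{Request: inner} // field name assumed

    quote, err := ward.RateQuote(req)
    if err != nil {
    	log.Fatal(err)
    }
    fmt.Printf("rate quote: %+v\n", quote)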