Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library. 
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Build and run this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like the sketch below. We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
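Putting those pieces together, a minimal sketch of the instrumented VLite might look roughly like the following. The monkit import paths (v3 here; older releases lived at gopkg.in/spacemonkeygo/monkit.v2) and the way the request context is wired in are assumptions on my part - adapt them to the monkit version and context helper (webhelp or otherwise) you actually use.

    package main

    import (
        "context"
        "net/http"

        "github.com/spacemonkeygo/monkit/v3"
        "github.com/spacemonkeygo/monkit/v3/environment"
        "github.com/spacemonkeygo/monkit/v3/present"
    )

    // Package-level monitoring Scope, one per package.
    var mon = monkit.Package()

    // HandleHTTP is instrumented: the named return value err lets the deferred
    // Task record failures, and the ctx pointer lets it attach span information.
    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        defer mon.Task()(&ctx)(&err)
        Proxy(ctx, w, r)
        return nil
    }

    // Proxy has no error return, so its Task is finished with nil instead of &err.
    func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) {
        defer mon.Task()(&ctx)(nil)
        // ... fetch the proxy target and copy its response to w
        // (and add caching here, if you'd like) ...
        w.Write([]byte("the world hasn't been destroyed yet\n"))
    }

    func main() {
        // Boilerplate: register environment statistics and serve the
        // observability endpoints on localhost:9000.
        environment.Register(monkit.Default)
        go http.ListenAndServe("localhost:9000", present.HTTP(monkit.Default))

        http.ListenAndServe("localhost:8080",
            http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                // webhelp (or plain r.Context()) provides the per-request context.
                HandleHTTP(r.Context(), w, r)
            }))
    }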
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? How it works: the deferred instrumentation call is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it if you prefer, so instead of calling mon.Task() on every invocation you could use a stored copy, which is more performant every time after the first time. runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. If you get the error "Bad Request: regex preselect matches 0 functions", you may have a bad regex, or you may be in this case.
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
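Going back to the mon.Task() discussion above: because runtime.Caller is only inspected when the Task is created, you can hoist that call out and reuse the result within a single function. A tiny sketch of that pattern, using the myFuncMon name mentioned earlier:

    // runtime.Caller runs once, on this line.
    var myFuncMon = mon.Task()

    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        // Reuse the stored Task instead of calling mon.Task() on every request.
        // Careful: don't share myFuncMon with any other function.
        defer myFuncMon(&ctx)(&err)
        // ...
        return nil
    }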
Package nodb is a high performance embedded NoSQL database. nodb supports various data structures such as kv, list, hash and zset, similar to Redis. Other features include binlog replication and data with a limited time-to-live. First create a nodb instance before use: cfg is a Config instance which contains the configuration nodb uses, like DataDir (the root directory where nodb stores its data). After you create a nodb instance, you can select a DB to store your data: a DB must be selected by an index; nodb supports only 16 databases, so the index range is [0-15]. KV is the most basic nodb type, like any other key-value database. List is simply a list of values, sorted by insertion order. You can push or pop values on the list head (left) or tail (right). Hash is a map between fields and values. ZSet is a sorted collection of values. Every member of a zset is associated with a score, an int64 value used to sort members from smallest to greatest score. Members are unique, but scores may be the same. nodb supports binlog, so you can sync the binlog to another server for replication. If you want to enable binlog support, set UseBinLog to true in the config.
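A minimal sketch of that flow is below. The lunny/nodb import paths and the Open/Select/Set/Get signatures are how I recall them from the package, so treat them as assumptions and check the package's documentation.

    package main

    import (
        "fmt"

        "github.com/lunny/nodb"
        "github.com/lunny/nodb/config"
    )

    func main() {
        // cfg holds nodb's configuration; DataDir is where the data lives.
        cfg := new(config.Config)
        cfg.DataDir = "./nodb-data"

        dbs, err := nodb.Open(cfg)
        if err != nil {
            panic(err)
        }

        // Select one of the 16 databases by index [0-15].
        db, err := dbs.Select(0)
        if err != nil {
            panic(err)
        }

        // Basic KV usage.
        if err := db.Set([]byte("hello"), []byte("world")); err != nil {
            panic(err)
        }
        v, err := db.Get([]byte("hello"))
        if err != nil {
            panic(err)
        }
        fmt.Printf("hello = %s\n", v)
    }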
Package multistream implements a simple stream router for the multistream-select protocol. The protocol is defined at https://github.com/multiformats/multistream-select
Package merry adds context to errors, including automatic stack capture, cause chains, HTTP status code, user messages, and arbitrary values. Wrapped errors work a lot like Google's golang.org/x/net/context package: each wrapper error contains the inner error, a key, and a value. Like contexts, errors are immutable: adding a key/value to an error always creates a new error which wraps the original. This package comes with built-in support for adding information to errors:
* stacktraces
* changing the error message
* HTTP status codes
* end user error messages
* causes
You can also add your own additional information. The stack capturing feature can be turned off for better performance, though it's pretty fast. Benchmarks on a 2017 MacBook Pro, with Go 1.10: This package contains functions for creating errors, or wrapping existing errors. To create: Additional context information can be attached to errors using functional options, called Wrappers: Errorf() also accepts wrappers, mixed in with the format args: Wrappers can be applied to existing errors with Wrap(): Wrap() will add a stacktrace to any error which doesn't already have one attached. WrapSkipping() can be used to control where the stacktrace starts. This package contains wrappers for adding specific context information to errors, such as an HTTPCode. You can create your own wrappers using the primitive Value(), WithValue(), and Set() functions. Errors produced by this package implement fmt.Formatter, to print additional information about the error: Details() prints the error message, all causes, the stacktrace, and additional error values configured with RegisterDetailFunc(). By default, it will show the HTTP status code and user message. By default, any error created by or wrapped by this package will automatically have a stacktrace captured and attached to the error. This capture only happens if the error doesn't already have a stack attached to it, so wrapping the error with additional context won't capture additional stacks. When and how stacks are captured can be customized. SetMaxStackDepth() can globally configure how many frames to capture. SetStackCaptureEnabled() can globally configure whether stacks are captured by default. Wrap(err, NoStackCapture()) can be used to selectively suppress stack capture for a particular error. Wrap(err, CaptureStack(false)) will capture a new stack at the Wrap call site, even if the err already had an earlier stack attached. The new stack overrides the older stack. Wrap(err, CaptureStack(true)) will force a stack capture at the call site even if stack capture is disabled globally. Finally, Wrappers are passed a depth argument so they know how deep they are in the call stack from the call site where this package's API was called. This allows Wrappers to implement their own stack capturing logic. The package contains functions for creating new errors with stacks, or adding a stack to `error` instances. Functions which add context (e.g. `WithValue()`) work on any `error`, and will automatically convert them to merry errors (with a stack) if necessary. AddHooks() can install wrappers which are applied to all errors processed by this package. Hooks are applied before any other wrappers or processing takes place. They can be used to integrate with errors from other packages, normalizing errors (such as applying standard status codes to application errors), localizing user messages, or replacing the stack capturing mechanism.
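A short sketch of the create/wrap/inspect flow described above. The /v2 import path and the specific wrapper names (WithHTTPCode, WithUserMessage) are assumptions based on the built-in HTTP code and user message support mentioned here; check the package docs for the exact names in the version you use.

    package main

    import (
        "database/sql"
        "fmt"

        "github.com/ansel1/merry/v2"
    )

    func loadUser(id string) error {
        // Create a new error with a stacktrace and some context attached.
        if id == "" {
            return merry.New("user id is empty",
                merry.WithHTTPCode(400),
                merry.WithUserMessage("please supply a user id"),
            )
        }

        // Wrap an existing error; a stack is captured only if err has none yet.
        err := sql.ErrNoRows
        return merry.Wrap(err,
            merry.WithHTTPCode(404),
            merry.WithValue("user_id", id),
        )
    }

    func main() {
        err := loadUser("42")
        // Details prints the message, causes, attached values, and the stacktrace.
        fmt.Println(merry.Details(err))
        fmt.Println("http code:", merry.HTTPCode(err))
    }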
Package unleash is a client library for connecting to an Unleash feature toggle server. See https://github.com/Unleash/unleash for more information. The API is very simple. The main functions of interest are Initialize and IsEnabled. Calling Initialize will create a default client and, if a listener is supplied, it will start the sync loop. Internally the client consists of two components. The first is the repository which runs in a separate Go routine and polls the server to get the latest feature toggles. Once the feature toggles are fetched, they are stored by sending the data to an instance of the Storage interface which is responsible for storing the data both in memory and also persisting it somewhere. The second component is the metrics component which is responsible for tracking how often features were queried and whether or not they were enabled. The metrics component also runs in a separate Go routine and will occasionally upload the latest metrics to the Unleash server. The client struct creates a set of channels that it passes to both of the above components and it uses those for communicating asynchronously. It is important to ensure that these channels get regularly drained to avoid blocking those Go routines. There are two ways this can be done. The first and perhaps simplest way to "drive" the synchronization loop in the client is to provide a type that implements one or more of the listener interfaces. There are 3 interfaces and you can choose which ones you should implement: If you are only interested in tracking errors and warnings and don't care about any of the other signals, then you only need to implement the ErrorListener and pass this instance to WithListener(). The DebugListener shows an example of implementing all of the listeners in a single type. If you would prefer to have control over draining the channels yourself, then you must not call WithListener(). Instead, you should read all of the channels continuously inside a select. The WithInstance example shows how to do this. Note that all channels must be drained, even if you are not interested in the result. The following examples show how to use the client in different scenarios. ExampleCustomStrategy demonstrates using a custom strategy. ExampleFallbackFunc demonstrates how to specify a fallback function. ExampleSimpleUsage demonstrates the simplest way to use the unleash client. ExampleWithInstance demonstrates how to create the client manually instead of using the default client. It also shows how to run the event loop manually.
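For the simplest case, a sketch along the lines of ExampleSimpleUsage. The import path and option names (WithListener, WithAppName, WithUrl) follow the Unleash Go client as I recall it; verify them against the client's documentation.

    package main

    import (
        "fmt"
        "time"

        "github.com/Unleash/unleash-client-go/v3"
    )

    func init() {
        // Initialize the default client; the DebugListener drains every
        // channel for us and logs what it sees.
        unleash.Initialize(
            unleash.WithListener(&unleash.DebugListener{}),
            unleash.WithAppName("my-application"),
            unleash.WithUrl("https://unleash.example.com/api/"), // placeholder URL
        )
    }

    func main() {
        for {
            fmt.Println("my.feature.toggle enabled:", unleash.IsEnabled("my.feature.toggle"))
            time.Sleep(time.Second)
        }
    }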
Package sparse provides implementations of selected sparse matrix formats. Matrices and linear algebra are used extensively in scientific computing and machine learning applications. Large datasets are analysed comprising vectors of numerical features that represent some object. The nature of feature encoding schemes, especially those like "one hot", tends to lead to vectors with mostly zero values for many of the features. In text mining applications, where features are typically terms from a vocabulary, it is not uncommon for 99% of the elements within these vectors to contain zero values. Sparse matrix formats take advantage of this fact to optimise memory usage and processing performance by only storing and processing non-zero values. Sparse matrix formats can broadly be divided into 3 main categories: 1. Creational - Sparse matrix formats suited to construction and building of matrices. Matrix formats in this category include DOK (Dictionary Of Keys) and COO (COOrdinate aka triplet). 2. Operational - Sparse matrix formats suited to arithmetic operations e.g. multiplication. Matrix formats in this category include CSR (Compressed Sparse Row aka CRS - Compressed Row Storage) and CSC (Compressed Sparse Column aka CCS - Compressed Column Storage) 3. Specialised - Specialised matrix formats suiting specific sparsity patterns. Matrix formats in this category include DIA (DIAgonal) for efficiently storing and manipulating symmetric diagonal matrices. A common practice is to construct sparse matrices using a creational format e.g. DOK or COO and then convert them to an operational format e.g. CSR for arithmetic operations. All sparse matrix implementations in this package implement the Matrix interface defined within the gonum/mat package and so may be used interchangeably with matrix types defined within the package e.g. mat.Dense, mat.VecDense, etc.
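A brief sketch of the construct-then-convert workflow described above, building with DOK and converting to CSR for arithmetic alongside gonum/mat. The NewDOK/Set/ToCSR names are assumed from this package's creational and operational formats; check the package docs for exact signatures.

    package main

    import (
        "fmt"

        "github.com/james-bowman/sparse"
        "gonum.org/v1/gonum/mat"
    )

    func main() {
        // Creational format: build the matrix incrementally as a DOK.
        dok := sparse.NewDOK(3, 3)
        dok.Set(0, 0, 1)
        dok.Set(1, 2, 2)
        dok.Set(2, 1, 3)

        // Operational format: convert to CSR for arithmetic.
        csr := dok.ToCSR()

        // CSR implements gonum's mat.Matrix, so it mixes freely with dense types.
        dense := mat.NewDense(3, 3, []float64{
            1, 0, 0,
            0, 1, 0,
            0, 0, 1,
        })

        var product mat.Dense
        product.Mul(csr, dense)

        fmt.Println(mat.Formatted(&product))
    }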
Package kustomize contains a selective set of Kustomize API types for use by GitOps Toolkit components. +kubebuilder:object:generate=true
Package registry uses the go-micro registry for selection
The gokogiri package provides a Go interface to the libxml2 library. It is inspired by the Ruby-based Nokogiri API, and allows one to parse, manipulate, and create HTML and XML documents. Nodes can be selected using either CSS selectors (in much the same fashion as jQuery) or XPath 1.0 expressions, and a simple DOM-like interface allows for building up documents from scratch.
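A small sketch of parsing a page and querying it with XPath. ParseHtml, Search, Content and Free follow the gokogiri API as I remember it and should be treated as assumptions.

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"

        "github.com/moovweb/gokogiri"
    )

    func main() {
        // Fetch some HTML to parse.
        resp, err := http.Get("https://example.com/")
        if err != nil {
            panic(err)
        }
        page, err := ioutil.ReadAll(resp.Body)
        resp.Body.Close()
        if err != nil {
            panic(err)
        }

        // Parse the page; documents are backed by libxml2 and must be freed.
        doc, err := gokogiri.ParseHtml(page)
        if err != nil {
            panic(err)
        }
        defer doc.Free()

        // Query with an XPath 1.0 expression.
        nodes, err := doc.Search("//h1")
        if err != nil {
            panic(err)
        }
        for _, n := range nodes {
            fmt.Println(n.Content())
        }
    }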
Package SQLittle provides pure Go, read-only, access to SQLite (version 3) database files. SQLittle reads SQLite3 tables and indexes. It iterates over tables, and can search efficiently using indexes. SQLittle will deal with all SQLite storage quirks, but otherwise it doesn't try to be smart; if you want to use an index you have to give the name of the index. There is no support for SQL, and if you want to do the most efficient joins possible you'll have to use the low level code. Based on https://sqlite.org/fileformat2.html and some SQLite source code reading. This whole thing is mostly for fun. The normal SQLite libraries are perfectly great, and there is no real need for this. However, since this library is pure Go cross-compilation is much easier. Given the constraints a valid use-case would for example be storing app configuration in read-only sqlite files. https://godoc.org/github.com/alicebob/sqlittle for the go doc and examples. See [LOWLEVEL.md](LOWLEVEL.md) about the low level reader. See [CODE.md](CODE.md) for an overview how the code is structured. Things SQLittle can do: Things SQLittle should do: Things SQLittle can not do: SQLittle has a read-lock on the file during the whole execution of the select-like functions. It's safe to update the database using SQLite while the file is opened in SQLittle. The current level of abstraction is likely the final one (that is: deal with reading single tables; don't even try joins or SQL or query planning), but the API might still change.
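A sketch of the row-callback style of reading a table. The Open and Select names and the Row.Scan helper are how I recall the package's high-level interface; double-check them against the godoc linked above.

    package main

    import (
        "fmt"

        "github.com/alicebob/sqlittle"
    )

    func main() {
        // Open a database file read-only.
        db, err := sqlittle.Open("music.sqlite")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Iterate over a table, receiving the requested columns per row.
        err = db.Select("tracks", func(r sqlittle.Row) {
            var artist, title string
            _ = r.Scan(&artist, &title)
            fmt.Printf("%s - %s\n", artist, title)
        }, "artist", "title")
        if err != nil {
            panic(err)
        }
    }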
Package shard implements the sharding call strategy
Package shard implements the sharding call strategy
Package shard implements the sharding call strategy
Package buffer contains buffer and wrapper types for byte slices. It is useful for writing lexers or other high-performance byte slice handling. The `Reader` and `Writer` types implement the `io.Reader` and `io.Writer` interfaces respectively and provide a thinner and faster interface than `bytes.Buffer`. The `Shifter` type is useful for building lexers because it keeps track of the start and end position of a byte selection, and shifts the bytes whenever a valid token is found. The `Lexer` is, however, an improved version of `Shifter`, allowing zero-copy for the parser by using a (kind of) ring buffer underneath.
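A tiny sketch of driving the `Lexer` by peeking, moving, and shifting tokens out of the buffer. The `NewLexer`, `Peek`, `Move`, and `Shift` names (and the convention that `Peek` returns 0 at the end of input) are assumptions about this package's API; verify them against its documentation.

    package main

    import (
        "fmt"
        "strings"

        "github.com/tdewolff/parse/v2/buffer"
    )

    func main() {
        l := buffer.NewLexer(strings.NewReader("one two three"))
        for {
            // Skip spaces in front of the next token and discard them.
            for l.Peek(0) == ' ' {
                l.Move(1)
            }
            l.Shift() // drop whatever was moved over (the spaces)

            // Peek returns 0 once the input is exhausted.
            if l.Peek(0) == 0 {
                return
            }

            // Consume a run of non-space bytes and shift it out as a token;
            // Shift returns the bytes without copying them.
            for c := l.Peek(0); c != ' ' && c != 0; c = l.Peek(0) {
                l.Move(1)
            }
            fmt.Printf("token: %q\n", l.Shift())
        }
    }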
Package roundrobin implements a roundrobin call strategy
Package merkletree is an implementation of a Merkle tree (https://en.wikipedia.org/wiki/Merkle_tree). It provides methods to create a tree and generate and verify proofs. The hashing algorithm for the tree is selectable between BLAKE2b and Keccak256, or you can supply your own. Creating a Merkle tree requires a list of values that are each byte arrays. Once a tree has been created, proofs can be generated using the tree's GenerateProof() function. The package includes a function to verify a generated proof given only the data to prove, the proof and the root hash of the relevant Merkle tree. This allows for efficient verification of proofs without requiring the entire Merkle tree to be stored or recreated. The tree pads its values to the next highest power of 2; values not supplied are treated as 0. This can be seen graphically by generating a DOT representation of the graph with DOT().
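A compact sketch of the create/prove/verify cycle described above. The import path shown is one well-known implementation and, together with the exact signatures of New, Root, GenerateProof and VerifyProof, should be treated as an assumption:

    package main

    import (
        "fmt"

        merkletree "github.com/wealdtech/go-merkletree"
    )

    func main() {
        // The values in the tree; each is a byte slice.
        data := [][]byte{
            []byte("Foo"),
            []byte("Bar"),
            []byte("Baz"),
        }

        // Build the tree.
        tree, err := merkletree.New(data)
        if err != nil {
            panic(err)
        }
        root := tree.Root()

        // Prove that "Baz" is in the tree...
        proof, err := tree.GenerateProof(data[2])
        if err != nil {
            panic(err)
        }

        // ...and verify the proof using only the data, the proof, and the root.
        ok, err := merkletree.VerifyProof(data[2], proof, root)
        if err != nil {
            panic(err)
        }
        fmt.Println("proof verified:", ok)
    }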
This example shows how to instrument sql queries in order to display the time that they consume.

The example's Hooks type satisfies the sqlhook.Hooks interface: the Before hook prints the query with its args and returns the context with a timestamp attached, and the After hook reads the timestamp registered by the Before hook and prints the elapsed time.

Output should look like:

    > CREATE TABLE t (id INTEGER, text VARCHAR(16)) []. took: 121.238µs
    > INSERT into t (text) VALUES(?), (?) ["foo" "bar"]. took: 36.364µs
    > SELECT id, text FROM t []. took: 4.653µs
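A hedged reconstruction of the example, assuming a sqlhooks-style Wrap helper, the Before/After hook signatures shown, and the mattn/go-sqlite3 driver; adjust the import paths and signatures to the hooks package you actually use:

    package main

    import (
        "context"
        "database/sql"
        "fmt"
        "time"

        "github.com/mattn/go-sqlite3"
        sqlhooks "github.com/qustavo/sqlhooks/v2"
    )

    // ctxKey is the context key under which Before stores the start time.
    type ctxKey struct{}

    // Hooks satisfies the sqlhooks.Hooks interface.
    type Hooks struct{}

    // Before prints the query with its args and stores the start time in the context.
    func (h *Hooks) Before(ctx context.Context, query string, args ...interface{}) (context.Context, error) {
        fmt.Printf("> %s %q", query, args)
        return context.WithValue(ctx, ctxKey{}, time.Now()), nil
    }

    // After reads the timestamp stored by Before and prints the elapsed time.
    func (h *Hooks) After(ctx context.Context, query string, args ...interface{}) (context.Context, error) {
        begin := ctx.Value(ctxKey{}).(time.Time)
        fmt.Printf(". took: %s\n", time.Since(begin))
        return ctx, nil
    }

    func main() {
        // Register a driver that wraps sqlite3 with the hooks.
        sql.Register("sqlite3WithHooks", sqlhooks.Wrap(&sqlite3.SQLiteDriver{}, &Hooks{}))

        db, err := sql.Open("sqlite3WithHooks", ":memory:")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        db.Exec("CREATE TABLE t (id INTEGER, text VARCHAR(16))")
        db.Exec("INSERT into t (text) VALUES(?), (?)", "foo", "bar")
        db.Query("SELECT id, text FROM t")
    }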
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).
2020-12-10: The database/sql driver now supports the url parameter removeemptywal=N which has the same semantics as passing RemoveEmptyWAL = N != 0 to OpenFile options.
2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add IsDuplicateUniqueIndexError function.
2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases, set the FileFormat field in *Options passed to OpenFile to value 2 or use the driver named "ql2" instead of "ql".
- Both the old and new driver will properly open and use, read and write, the old (V1) or new (V2) file format of an existing database.
- V1 format has a record size limit of ~64 kB. V2 format record size limit is math.MaxInt32.
- V1 format uncommitted transaction size is limited by memory resources. V2 format uncommitted transaction is limited by free disk space.
- A direct consequence of the previous is that small transactions perform better using V1 format and big transactions perform better using V2 format.
- V2 format uses substantially less memory.
2018-08-02: Release v1.2.0 adds initial support for Go modules.
2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.
2016-07-29: Release v1.0.6 enables alternatively using = instead of == for equality operation.
2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb).
2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.
2016-04-03: Release v1.0.3 fixes a data race.
2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock.
2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.
2016-03-05: The current version has been tagged v1.0.0.
2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.
2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.
2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when it has the UNIQUE clause, to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default.
2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.
2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.
2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)
2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section.
2015-01-17: Logical operators || and && have now alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.
2014-09-21: id() now optionally accepts a single argument - a table name.
2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.
2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)
2014-06-05: RecordSet interface extended by new methods FirstRow and Rows.
2014-06-02: Indices on id() are now used by SELECT statements.
2014-05-07: Introduction of Marshal, Schema, Unmarshal.
2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX.
2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.
2014-04-11: Introduction of LIMIT, OFFSET.
2014-04-10: Introduction of query rewriting.
2014-04-07: Introduction of indices.
QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes. The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL key words or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes. «TRANSACTION» «duration» «lovely stories» No identifiers are predeclared, however note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens. Operators consisting of more than one character are referred to by names in the rest of the documentation. An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. 
A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply
- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string a is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.
For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only.

    +    sum                    integers, rationals, floats, complex values, strings
    -    difference             integers, rationals, floats, complex values, times
    *    product                integers, rationals, floats, complex values
    /    quotient               integers, rationals, floats, complex values
    %    remainder              integers

    &    bitwise AND            integers
    |    bitwise OR             integers
    ^    bitwise XOR            integers
    &^   bit clear (AND NOT)    integers

    <<   left shift             integer << unsigned integer
    >>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows
- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.
- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).
- Integer values are comparable and ordered, in the usual way. Note that durations are integers.
- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.
- Rational values are comparable and ordered, in the usual way.
- String and Blob values are comparable and ordered, lexically byte-wise.
- Time values are comparable and ordered.
Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:
- x is representable by a value of type T.
- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.
- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.
Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases:
- x has type T.
- x's type and T are both integer or floating point types.
- x's type and T are both complex types.
- x is an integer, except bigint or duration, and T is a string type.
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply:
1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".
2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.
3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.
4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.
5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.
6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).
7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.
8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.
9. Converting a string value to a duration yields a duration represented by the string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
10. Converting a time value to a string returns the time formatted using the format string
When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as the name of any existing table and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables.
A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing insertion of values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as is the column's type or the value must be NULL. If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS turns the insert statement in such a case into a no-op. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis.
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never make expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause.
Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. 1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table.
For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to the given year, month, day, hour, minute, second and nanosecond in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL. The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of complex, real, imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", ie. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
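To tie the statements and built-ins described above together, here is a minimal sketch of driving the QL engine from Go. It assumes the embedded modernc.org/ql API (OpenMem, NewRWCtx, DB.Run and the Recordset.Do iterator), which is not part of the specification text above; the table and column names are made up for illustration.

	package main

	import (
		"fmt"
		"log"

		"modernc.org/ql"
	)

	func main() {
		// Open an in-memory database; a file-backed DB would use ql.OpenFile (assumed API).
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}

		// Mutating statements must be wrapped in a transaction.
		ctx := ql.NewRWCtx()
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE dept (DepartmentID int, DepartmentName string);
				INSERT INTO dept VALUES (1, "Engineering");
				INSERT INTO dept VALUES (2, "Accounting");
			COMMIT;
		`); err != nil {
			log.Fatal(err)
		}

		// A SELECT works outside of a transaction and yields record sets.
		rss, _, err := db.Run(nil, "SELECT DepartmentID, DepartmentName FROM dept ORDER BY DepartmentID;")
		if err != nil {
			log.Fatal(err)
		}
		if err := rss[0].Do(false, func(rec []interface{}) (bool, error) {
			fmt.Println(rec)
			return true, nil // true means: keep iterating
		}); err != nil {
			log.Fatal(err)
		}
	}

The mutating statements run inside BEGIN TRANSACTION/COMMIT, as the specification requires, while the SELECT runs outside of a transaction.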
Package yamled implements helpers for in-place editing of YAML sources. The editing is performed by a golang.org/x/text/transform.Transformer implementation configured with one or more editing operations. Editing operations are defined as string replacements over selections covering YAML nodes in the YAML source. Selections are constructed from a *yaml.Node value that can be obtained either by manually navigating the YAML node tree or by using other packages, like those provided by the YAML JSONPointer or YAML JSONPath libraries.
Proteus /proʊtiəs/ is a tool to generate protocol buffers version 3 compatible `.proto` files from your Go structs, types and functions. The motivation behind this library is to use Go as the source of truth for your models, instead of going the other way around: writing a `.proto` file and then generating Go code from it, which does not produce idiomatic code. Proteus scans all the code in the selected packages and generates protobuf messages for every exported struct (and all the ones that are referenced in any other struct, even though they are not exported). The types that semantically are used as enumerations in Go are transformed into proper protobuf enumerations. All the exported functions and methods will be turned into protobuf RPC services.
Package godb is a query builder and struct mapper. godb does not manage relationships like Active Record or Entity Framework, it's not a full-featured ORM. Its goal is to be more productive than manually doing mapping between Go structs and database tables. godb needs adapters to use databases, some are packaged with godb for: Start with an adapter, and the Open method which returns a godb.DB pointer: There are three ways to execute SQL with godb: Using raw queries you can execute any SQL queries and get the results into a slice of structs (or single struct) using the automatic mapping. Structs tools look more 'orm-ish' as they take instances of objects or slices to run select, insert, update and delete. Statements tools stand between raw queries and structs tools. They are easier to use than raw queries, but are limited to simpler cases. The statements tools are based on types: Example: The SelectStatement type could also build a query using columns from a struct. It facilitates the build of queries returning values from multiple tables (or views). See struct mapping explanations, in particular the `rel` part. Example: The structs tools are based on types: Examples: Raw queries are executed using the RawSQL type. The query could be a simple hand-written string, or something complex built using SQLBuffer and Conditions. Example: Structs contents are mapped to database columns with tags, like in the previous example with the Book struct. The tag is 'db' and its content is: For an autoincrement identifier simply use both 'key' and 'auto'. Example: More than one field could have the 'key' keyword, but with most database drivers none of them could have the 'auto' keyword, because executing an insert query only returns one value: the last inserted id: https://golang.org/pkg/database/sql/driver/#RowsAffected.LastInsertId. With PostgreSQL you can have multiple fields with 'key' and 'auto' options. Structs could be nested. A nested struct is mapped only if it has the 'db' tag. The tag value is a column prefix applied to all field columns of the struct. The prefix is not mandatory, a blank string is allowed (no prefix). A nested struct could also have an optional `rel` attribute of the form `rel=relationname`. It's useful to build a select query using multiple relations (table, view, ...). See the example using the BooksWithInventories type. Example Database columns are: The mapping is managed by the 'dbreflect' subpackage. Normally its direct use is not necessary, except in one case: some structs are scannable and have to be considered like fields, and mapped to database columns. Common cases are time.Time or sql.NullString, ... You can register a custom struct with `RegisterScannableStruct` and a struct instance, for example the time.Time is registered like this: The structs statements use the struct name as table name. But you can override this simply by implementing a TableName method: Statements and structs tools manage 'where' and 'group by' sql clauses. These conditional clauses are built either with raw sql code, or with the Condition struct like this: WhereQ methods take a Condition instance built by godb.Q. Where methods take raw SQL; this is just syntactic sugar. These calls are equivalent: Multiple calls to Where or WhereQ are allowed, these calls are equivalent: Slices are managed in a particular way: a single placeholder is replaced with multiple ones. This allows code like: The SQLBuffer exists to ease the build of complex raw queries.
It's also used internally by godb. Its use and purpose are simple: concatenate sql parts (accompanied by their arguments) in an efficient way. Example: For all databases, structs updates and deletes manage optimistic locking when a dedicated integer field is present. Simply tag it with `oplock`: When an update or delete operation fails, Do() returns the `ErrOpLock` error. With PostgreSQL and SQL Server, godb manages optimistic locking with automatic fields. Just add a dedicated field in the struct and tag it with `auto,oplock`. With PostgreSQL you can use the `xmin` system column like this: For more information about `xmin` see https://www.postgresql.org/docs/10/static/ddl-system-columns.html With SQL Server you can use a `rowversion` field with the `mssql.Rowversion` type like this: For more information about the `rowversion` data type see https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql godb keeps track of the time consumed while executing queries. You can reset it and get the time consumed since Open or the previous reset: You can log all executed queries and details of consumed time. Simply add a logger: godb takes advantage of the PostgreSQL RETURNING clause, and the SQL Server OUTPUT clause. With statements tools you have to add a RETURNING clause with the Suffix method and call the DoWithReturning method instead of Do(). It's optional. With StructInsert it's transparent, the RETURNING or OUTPUT clause is added for all 'auto' columns and it's managed for you. One of the big advantages is with BulkInsert: for other databases the rows are inserted but the new keys are unknown. With PostgreSQL and SQL Server the slice is updated for all inserted rows. It also enables optimistic locking with *automatic* columns. godb has two prepared statements caches, one to use during transactions, and one to use outside of a transaction. Both use an LRU algorithm. The transaction cache is enabled by default, but not the other. A transaction (sql.Tx) isn't shared between goroutines, using prepared statements with it has a predictable behaviour. But without a transaction a prepared statement could have to be reprepared on a different connection if needed, leading to unpredictable performance in high concurrency scenarios. Enabling the non transaction cache could improve performance with single goroutine batches. With multiple goroutines accessing the same database: it depends! A benchmark would be wise. Using statements tools and structs tools you can execute select queries and get an iterator instead of filling a slice of struct instances. This could be useful if the request's result is big and you don't want to allocate too much memory. On the other side you will write almost as much code as with the `sql` package, but with an automatic struct mapping, and a request builder. Iterators are also available with raw queries. In this case you can execute any kind of sql code, not just select queries. To get an iterator simply use the `DoWithIterator` method instead of `Do`. The iterator usage is similar to the standard `sql.Rows` type. Don't forget to check that there are no errors with the `Err` method, and don't forget to call `Close` when the iterator is no longer useful, especially if you don't scan the whole resultset. To avoid performance cost godb.DB does not implement synchronization. So a given instance of godb.DB should not be used by multiple goroutines. But a godb.DB instance can be created and used as a blueprint and cloned for each goroutine. See Clone and Clear methods. 
A typical use case is a web server. When the application starts a godb.DB is created, then cloned in each http handler with Clone, and resources are freed by calling Clear (use a defer statement).
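As a hedged illustration of the mapping and query tools described above, the sketch below assumes the commonly documented godb API (godb.Open with the sqlite adapter, struct tools with Insert/Select and Do); the Book struct, column names and "./library.db" data source are hypothetical and should be adapted to your own schema.

	package main

	import (
		"log"

		"github.com/samonzeweb/godb"
		"github.com/samonzeweb/godb/adapters/sqlite"
	)

	// Book shows the 'db' tag mapping; table and column names are hypothetical.
	type Book struct {
		Id     int    `db:"id,key,auto"`
		Title  string `db:"title"`
		Author string `db:"author"`
	}

	func main() {
		// Open a DB with an adapter (assumed sqlite adapter and data source name).
		db, err := godb.Open(sqlite.Adapter, "./library.db")
		if err != nil {
			log.Fatal(err)
		}

		// Structs tools: insert an instance; the 'auto' key is filled after Do().
		book := Book{Title: "The Hobbit", Author: "J.R.R. Tolkien"}
		if err := db.Insert(&book).Do(); err != nil {
			log.Fatal(err)
		}

		// Structs tools: select into a slice, filtered by a where clause.
		var books []Book
		if err := db.Select(&books).Where("author = ?", "J.R.R. Tolkien").Do(); err != nil {
			log.Fatal(err)
		}
		log.Printf("inserted id %d, found %d book(s)", book.Id, len(books))
	}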
Package registry uses the go-micro registry for selection
Package roundrobin implements a roundrobin call strategy
Package protocompile provides the entry point for a high performance native Go protobuf compiler. "Compile" in this case just means parsing and validating source and generating fully-linked descriptors in the end. Unlike the protoc command-line tool, this package does not try to use the descriptors to perform code generation. The various sub-packages represent the various compile phases and contain models for the intermediate results. Those phases follow: This package provides an easy-to-use interface that does all the relevant phases, based on the inputs given. If an input is provided as source, all phases apply. If an input is provided as a descriptor proto, only phases 3 to 5 apply. Nothing is necessary if provided a linked descriptor (which is usually only the case for select system dependencies). This package is also capable of taking advantage of multiple CPU cores, so a compilation involving thousands of files can be done very quickly by compiling things in parallel. A Resolver is how the compiler locates artifacts that are inputs to the compilation. For example, it can load protobuf source code that must be processed. A Resolver could also supply some already-compiled dependencies as fully-linked descriptors, alleviating the need to re-compile them. A Resolver can provide any of the following in response to a query for an input. Compilation will use the Resolver to load the files that are to be compiled and also to load all dependencies (i.e. other files imported by those being compiled). A Compiler accepts a list of file names and produces the list of descriptors. A Compiler has several fields that control how it works but only the Resolver field is required. A minimal Compiler, that resolves files by loading them from the file system based on the current working directory, can be had with the following simple snippet: This minimal Compiler will use default parallelism, equal to the number of CPU cores detected; it will not generate source code info in the resulting descriptors; and it will fail fast at the first sign of any error. All of these aspects can be customized by setting other fields.
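Since the snippet referred to above is not reproduced here, the following is a sketch of such a minimal Compiler. It assumes the protocompile.Compiler and SourceResolver types plus the WithStandardImports helper; "foo.proto" is a placeholder file name.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/bufbuild/protocompile"
	)

	func main() {
		// A minimal Compiler: only the Resolver field is set. A SourceResolver with no
		// import paths loads files relative to the current working directory, and
		// WithStandardImports also makes the well-known imports available (assumed helper).
		compiler := protocompile.Compiler{
			Resolver: protocompile.WithStandardImports(&protocompile.SourceResolver{}),
		}

		// Compile one or more files into fully-linked descriptors.
		files, err := compiler.Compile(context.Background(), "foo.proto")
		if err != nil {
			log.Fatal(err)
		}
		for _, fd := range files {
			fmt.Println(fd.Path())
		}
	}

With only the Resolver set, the other Compiler fields keep their defaults: parallelism equal to the number of CPU cores, no source code info, and fail-fast error handling, as described above.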
2fa is a two-factor authentication agent. Usage: “2fa -add name” adds a new key to the 2fa keychain with the given name. It prints a prompt to standard error and reads a two-factor key from standard input. Two-factor keys are short case-insensitive strings of letters A-Z and digits 2-7. By default the new key generates time-based (TOTP) authentication codes; the -hotp flag makes the new key generate counter-based (HOTP) codes instead. By default the new key generates 6-digit codes; the -7 and -8 flags select 7- and 8-digit codes instead. “2fa -list” lists the names of all the keys in the keychain. “2fa name” prints a two-factor authentication code from the key with the given name. If “-clip” is specified, 2fa also copies the code to the system clipboard. With no arguments, 2fa prints two-factor authentication codes from all known time-based keys. The default time-based authentication codes are derived from a hash of the key and the current time, so it is important that the system clock have at least one-minute accuracy. The keychain is stored unencrypted in the text file $HOME/.2fa. During GitHub 2FA setup, at the “Scan this barcode with your app” step, click the “enter this text code instead” link. A window pops up showing “your two-factor secret,” a short string of letters and digits. Add it to 2fa under the name github, typing the secret at the prompt: Then whenever GitHub prompts for a 2FA code, run 2fa to obtain one: Or to type less:
Package workmail provides the API client, operations, and parameter types for Amazon WorkMail. WorkMail is a secure, managed business email and calendaring service with support for existing desktop and mobile email clients. You can access your email, contacts, and calendars using Microsoft Outlook, your browser, or other native iOS and Android email applications. You can integrate WorkMail with your existing corporate directory and control both the keys that encrypt your data and the location in which your data is stored. The WorkMail API is designed for the following scenarios: Listing and describing organizations Managing users Managing groups Managing resources All WorkMail API operations are Amazon-authenticated and certificate-signed. They not only require the use of the AWS SDK, but also allow for the exclusive use of AWS Identity and Access Management users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the WorkMail site, the IAM user gains full administrative visibility into the entire WorkMail organization (or as set in the IAM policy). This includes, but is not limited to, the ability to create, update, and delete users, groups, and resources. This allows developers to perform the scenarios listed above, as well as give users the ability to grant access on a selective basis using the IAM model.
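A hedged sketch of the client pattern with the AWS SDK for Go v2 follows; it assumes config.LoadDefaultConfig, workmail.NewFromConfig and the ListOrganizations operation, and a real call requires valid AWS credentials and permissions.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/workmail"
	)

	func main() {
		ctx := context.Background()

		// Load region and credentials from the default chain (env vars, shared config, ...).
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}

		// Create the WorkMail client from the shared configuration.
		client := workmail.NewFromConfig(cfg)

		// List the WorkMail organizations visible to the caller.
		out, err := client.ListOrganizations(ctx, &workmail.ListOrganizationsInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, org := range out.OrganizationSummaries {
			if org.OrganizationId != nil {
				fmt.Println(*org.OrganizationId)
			}
		}
	}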
Package tcpinfo implements encoding and decoding of TCP-level socket options regarding connection information. The Transmission Control Protocol (TCP) is defined in RFC 793. TCP Selective Acknowledgment Options is defined in RFC 2018. Management Information Base for the Transmission Control Protocol (TCP) is defined in RFC 4022. TCP Congestion Control is defined in RFC 5681. Computing TCP's Retransmission Timer is described in RFC 6298. TCP Options and Maximum Segment Size (MSS) is defined in RFC 6691. Shared Use of Experimental TCP Options is defined in RFC 6994. TCP Extensions for High Performance is defined in RFC 7323. NOTE: Older Linux kernels may not support extended TCP statistics described in RFC 4898.
Package workdocs provides the API client, operations, and parameter types for Amazon WorkDocs. The Amazon WorkDocs API is designed for the following use cases: File Migration: File migration applications are supported for users who want to migrate their files from an on-premises or off-premises file system or service. Users can insert files into a user directory structure, as well as allow for basic metadata changes, such as modifications to the permissions of files. Security: Security applications are supported for users who have additional security needs, such as antivirus or data loss prevention. The API actions, along with CloudTrail, allow these applications to detect when changes occur in Amazon WorkDocs. Then, the application can take the necessary actions and replace the target file. If the target file violates the policy, the application can also choose to email the user. eDiscovery/Analytics: General administrative applications are supported, such as eDiscovery and analytics. These applications can choose to mimic or record the actions in an Amazon WorkDocs site, along with CloudTrail, to replicate data for eDiscovery, backup, or analytical applications. All Amazon WorkDocs API actions are Amazon authenticated and certificate-signed. They not only require the use of the Amazon Web Services SDK, but also allow for the exclusive use of IAM users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the Amazon WorkDocs site, the IAM user gains full administrative visibility into the entire Amazon WorkDocs site (or as set in the IAM policy). This includes, but is not limited to, the ability to modify file permissions and upload any file to any user. This allows developers to perform the three use cases above, as well as give users the ability to grant access on a selective basis using the IAM model. The pricing for Amazon WorkDocs APIs varies depending on the API call type for these actions: READ (Get*) WRITE (Activate*, Add*, Create*, Deactivate*, Initiate*, Update*) LIST (Describe*) DELETE*, CANCEL For information about Amazon WorkDocs API pricing, see Amazon WorkDocs Pricing.
Package ccgo translates C to Go source code. This v3 package is obsolete. Please use current ccgo/v4: Invocation 2021-12-23: v3.13.0 adds clang support. To compile the resulting Go programs the package modernc.org/libc has to be installed. CCGO_CPP selects which command is used by the C front end to obtain target configuration. Defaults to `cpp`. Ignored when --load-config <path> is used. TARGET_GOARCH selects the GOARCH of the resulting Go code. Defaults to $GOARCH or runtime.GOARCH if $GOARCH is not set. Ignored when --load-config <path> is used. TARGET_GOOS selects the GOOS of the resulting Go code. Defaults to $GOOS or runtime.GOOS if $GOOS is not set. Ignored when --load-config <path> is used. To compile for the host invoke something like To cross compile set TARGET_GOARCH and/or TARGET_GOOS, not GOARCH/GOOS. Cross compile depends on availability of C stdlib headers for the target platform as well on the set of predefined macros for the target platform. For example, to cross compile on a Linux host, targeting windows/amd64, it's necessary to have mingw64 installed in $PATH. Then invoke something like Only files with extension .c, .h or .json are recognized as input files. A .json file is interpreted as a compile database. All other command line arguments following the .json file are interpreted as items that should be found in the database and included in the output file. Each item should be an object file (.o), a static archive (.a) or a command (no extension). Command line options requiring an argument. -Dfoo Equals `#define foo 1`. -Dfoo=bar Equals `#define foo bar`. -Ipath Add path to the list of include file search paths. The option is a capital letter I (India), not a lowercase letter l (Lima). -limport-path The package at <import-path> must have been produced without using the -nocapi option, ie. the package must have a proper capi_$GOOS_$GOARCH.go file. The option is a lowercase letter l (Lima), not a capital letter I (India). -Ufoo Equals `#undef foo`. -compiledb name When this option appears anywhere, most preceding options are ignored and all following command line arguments are interpreted as a command with arguments that will be executed to produce the compilation database. For example: This will execute `make -DFOO -w` and attempt to extract the compile and archive commands. Only POSIX operating systems are supported. The supported build system must output information about entering directories that is compatible with GNU make. The only compilers supported are `gcc` and `clang`. The only archiver supported is `ar`. Format specification: https://clang.llvm.org/docs/JSONCompilationDatabase.html Note: This option also produces information about libraries created with `ar cr` and includes it in the json file, which goes beyond the specification. -crt-import-path path Unless disabled by the -nostdlib option, every produced Go file imports the C runtime library. Default is `modernc.org/libc`. -export-defines "" Export C numeric/string defines as Go constants by capitalizing the first letter of the define's name. -export-defines prefix Export C numeric/string defines as Go constants by prefixing the define's name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-enums "" Export C enum constants as Go constants by capitalizing the first letter of the enum constant name. -export-enums prefix Export C enum constants as Go constants by prefixing the enum constant name with `prefix`. Name conflicts are resolved by adding a numeric suffix. 
-export-externs "" Export C extern definitions as Go definitions by capitalizing the first letter of the definition name. -export-externs prefix Export C extern definitions as Go definitions by prefixing the definition name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-fields "" Export C struct fields as Go fields by capitalizing the first letter of the field name. -export-fields prefix Export C struct fields as Go fields by prefixing the field name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-structs "" Export tagged C struct/union types as Go types by capitalizing the first letter of the tag name. -export-structs prefix Export tagged C struct/union types as Go types by prefixing the tag name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-typedefs "" Export C typedefs as Go types by capitalizing the first letter of the typedef name. -export-typedefs prefix Export C typedefs as Go types by prefixing the typedef name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -static-locals-prefix prefix Prefix C static local declarator names with 'prefix'. -host-config-cmd command This option has the same effect as setting `CCGO_CPP=command`. -host-config-opts comma-separated-list The separated items of the list are added to the invocation of the configuration command. -pkgname name Set the resulting Go package name to 'name'. Defaults to `main`. -script filename Ccgo does not yet have a concept of object files. All C files that are needed for producing the resulting Go file have to be compiled together and "linked" in memory. There are some problems with this approach, one of them is the situation when foo.c has to be compiled using, for example `-Dbar=42` and "linked" with baz.c that needs to be compiled with `-Dbar=314`. Or `bar` must not be defined at all for baz.c, etc. A script in a named file is a CSV file. It is opened like this (error handling omitted; see the sketch at the end of this section): The first field of every record in the CSV file is the directory to use. The remaining fields are the arguments of the ccgo command. This way different C files can be translated using different options. The CSV file may look something like: -volatile comma-separated-list The separated items of the list are added to the list of file scope extern variables that will be accessed atomically, as if their C declarator used the 'volatile' type specifier. Currently only C scalar types of size 4 and 8 bytes are supported. Other types/sizes will ignore both the volatile specifier and the -volatile option. -save-config path This option copies every header included during compilation or compile database generation to a file under the path argument. Additionally the host configuration, ie. predefined macros, include search paths, os and architecture is stored in path/config.json. When this option is used, no Go code is generated, meaning no link phase occurs and thus the memory consumption should stay low. Passing an empty string as an argument of -save-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. This option is ignored when -compiledb <path> is used. --load-config path Note that this option must have the double dash prefix to distinguish it from -lfoo, the [traditional] short form of `-l foo`. This option configures the compiler using path/config.json. The include paths are adjusted to be relative to path. 
For example: Assume on machine A the default C preprocessor reports a system include search path "/usr/include". Running ccgo on A with -save-config /tmp/foo to compile foo.c that #includes <stdlib.h>, which is found in /usr/include/stdlib.h on the host results in Assume /tmp/foo from machine A will be recursively copied to machine B, which may run a different operating system and/or architecture. Let the copy be for example in /tmp/bar. Using --load-config /tmp/bar will instruct ccgo to configure its preprocessor with a system include path /tmp/bar/usr/include and thus use the original machine A stdlib.h found there. When the --load-config is used, no host configuration from a machine B cross C preprocessor/compiler is needed to transpile the foo.c source on machine B as if the compiler were running on machine A. The particular usefulness of this mechanism is for transpiling big projects for 32 bit architectures. There, the lack of an object format in ccgo, and thus the need to link everything in RAM, can require too much memory for the system to handle. The way around this is possibly to run something like on machine A, transfer path/* to machine B and run the link phase there with eg. Note that the C sources for the project must be in the same path on both machines because the compile database stores absolute paths. It might be convenient to put the sources in path/src, the config in path/config, for example, and transfer the [archive of] path/ to the same directory on the second machine. That also solves the issue when ./configure generates files and the result differs per operating system or architecture. Passing an empty string as an argument of -load-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. These command line options don't take arguments. -E When this option is present the compiler does not produce any Go files and instead prints the preprocessor output to stdout. -all-errors Normally only the first 10 or so errors are shown. With this option the compiler will show all errors. -header Using this option suppresses the production of any function definitions. This is possibly useful for producing Go files from C header files. Including function signatures with -header. -func-sig Add this option to include function signatures when compiling headers (using -header). -nostdinc This option disables the default C include search paths. -nostdlib This option disables importing of the runtime library by the resulting Go code. -trace-pinning This option will print the positions and names of local declarators that are being pinned. -version Ignore all other options, print version and exit. -verbose-compiledb Enable verbose output when -compiledb is present. -ignore-undefined This option tells the linker to not insist on finding definitions for declarators that are not implicitly declared and used - but not defined. This might be useful when the intent is to define the missing functions manually in Go. Name conflict resolution for such declarator names may or may not be applied. -ignore-unsupported-alignment This option tells the compiler to not complain about alignments that Go cannot support. -trace-included-files This option outputs the path names of all included files. This option is ignored when -compiledb <path> is used. There may exist other options not listed above. Those should be considered temporary and/or unsupported and may be removed without notice. 
Alternatively, they may eventually get promoted to "documented" options.
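Relating to the -script option above, the following is a sketch of reading such a CSV script with the standard encoding/csv package; the exact reader settings used by ccgo and the file name are assumptions here, only the record layout (working directory followed by ccgo arguments) comes from the text.

	package main

	import (
		"encoding/csv"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Each record: the working directory, then the arguments of one ccgo invocation.
		f, err := os.Open("script.csv") // hypothetical file name
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		r := csv.NewReader(f)
		r.FieldsPerRecord = -1 // records may have a varying number of fields
		r.TrimLeadingSpace = true

		records, err := r.ReadAll()
		if err != nil {
			log.Fatal(err)
		}
		for _, rec := range records {
			dir, args := rec[0], rec[1:]
			fmt.Printf("in %s: ccgo %v\n", dir, args)
		}
	}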
Package smapping is a library for collecting various operations on structs and their mapping to the interface{} and/or map[string]interface{} types. It is implemented to ease the conversion between Go structs and the json format, together with easy mapping selection using different parts of field tagging. The implementation is an abstraction on top of the reflection package, reflect. The code snippet below will be used across the examples for brevity
Package securecookie encodes and decodes authenticated and optionally encrypted cookie values. Secure cookies can't be forged, because their values are validated using HMAC. When encrypted, the content is also inaccessible to malicious eyes. To use it, first create a new SecureCookie instance: The hashKey is required, used to authenticate the cookie value using HMAC. It is recommended to use a key with 32 or 64 bytes. The blockKey is optional, used to encrypt the cookie value -- set it to nil to not use encryption. If set, the length must correspond to the block size of the encryption algorithm. For AES, used by default, valid lengths are 16, 24, or 32 bytes to select AES-128, AES-192, or AES-256. Strong keys can be created using the convenience function GenerateRandomKey(). Once a SecureCookie instance is set, use it to encode a cookie value: Later, use the same SecureCookie instance to decode and validate a cookie value: We stored a map[string]string, but secure cookies can hold any value that can be encoded using encoding/gob. To store custom types, they must be registered first using gob.Register(). For basic types this is not needed; it works out of the box.
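A short sketch of the encode/decode round trip described above follows; it assumes the gorilla/securecookie import path and the HTTP handler wiring shown, and in a real application the keys would be persisted and loaded rather than regenerated at startup.

	package main

	import (
		"fmt"
		"log"
		"net/http"

		"github.com/gorilla/securecookie"
	)

	// Keys are generated here only to keep the sketch self-contained.
	var (
		hashKey  = securecookie.GenerateRandomKey(64)
		blockKey = securecookie.GenerateRandomKey(32) // nil disables encryption
		s        = securecookie.New(hashKey, blockKey)
	)

	// SetCookieHandler encodes a map into an authenticated, encrypted cookie.
	func SetCookieHandler(w http.ResponseWriter, r *http.Request) {
		value := map[string]string{"foo": "bar"}
		encoded, err := s.Encode("cookie-name", value)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		http.SetCookie(w, &http.Cookie{Name: "cookie-name", Value: encoded, Path: "/", Secure: true, HttpOnly: true})
	}

	// ReadCookieHandler validates and decodes the cookie back into a map.
	func ReadCookieHandler(w http.ResponseWriter, r *http.Request) {
		cookie, err := r.Cookie("cookie-name")
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		value := make(map[string]string)
		if err := s.Decode("cookie-name", cookie.Value, &value); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Fprintf(w, "foo=%q", value["foo"])
	}

	func main() {
		http.HandleFunc("/set", SetCookieHandler)
		http.HandleFunc("/read", ReadCookieHandler)
		log.Fatal(http.ListenAndServe(":8080", nil))
	}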
Package amqp is an AMQP 0.9.1 client with RabbitMQ extensions. Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker. In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the message well known properties are strongly typed fields on the Publishings and Deliveries, whereas the user defined headers are in the Headers field. The method naming closely matches the protocol's method name with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx. The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter like Channel.QueueDeclare, and some methods are asynchronous like Channel.Publish. The error values should still be checked for these methods as they will indicate IO failures like when the underlying connection closes. Clients of this library may be interested in receiving some of the protocol messages other than Deliveries like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block. It's important as a client to an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.Publish or Channel.Consume. SSL/TLS - Secure connections When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate, or dial your own connection. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme. 
SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html This exports a Session object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. Try running this in one terminal, and `rabbitmq-server` in another. Stop & restart RabbitMQ to see how the queue reacts.
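The sketch below shows the declare-then-publish/consume pattern discussed above, using Dial, Channel.QueueDeclare, Channel.Publish and Channel.Consume; the import path, broker URL and queue name are assumptions for illustration, and production code should also watch the Notify* channels for out-of-band events.

	package main

	import (
		"log"

		"github.com/streadway/amqp" // assumed import path for this package
	)

	func main() {
		// Connect, then open a channel; both must be closed when done.
		conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ch, err := conn.Channel()
		if err != nil {
			log.Fatal(err)
		}
		defer ch.Close()

		// Declare the topology you expect before publishing or consuming.
		q, err := ch.QueueDeclare("hello", false, false, false, false, nil)
		if err != nil {
			log.Fatal(err)
		}

		// Publish to the default exchange, routed by queue name.
		err = ch.Publish("", q.Name, false, false, amqp.Publishing{
			ContentType: "text/plain",
			Body:        []byte("hello, world"),
		})
		if err != nil {
			log.Fatal(err)
		}

		// Consume deliveries; every asynchronous channel must have a receiver.
		deliveries, err := ch.Consume(q.Name, "", true, false, false, false, nil)
		if err != nil {
			log.Fatal(err)
		}
		for d := range deliveries {
			log.Printf("received %q", d.Body)
			break
		}
	}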
Package sgr provides formatters to write ANSI escape sequences that can format and colorize text. The exact appearance of text, and whether or not various characteristics are even supported, will depend on the terminal capabilities. In particular, writing to a regular file will suppress all ANSI escape sequences. Package sgr will default to no-color when there is doubt about whether or not the output stream can support ANSI sequences. There are some convenience methods to print common options, such as bold or red text. To get full control over the color and formatting of some text, first create a Style, and then use Formatter.Style. Calls to functions and methods in this package do not directly result in any I/O. To get the colorized text, the return values of certain methods must be printed using package fmt. For methods simply taking a string, such as Formatter.Bold, the returned value must be printed using either the verb %v or %s. All of the options for %s are supported, including width, precision, and flags. For methods that take a format string plus arguments, such as Formatter.Boldf, the returned value must still be printed using either %v or %s. However, no options are supported. To control options such as width and precision, use the format string passed to Formatter.Boldf. Selection of the correct formatter depends on several environment variables. Primarily, package sgr checks the value of TERM to see if any color space attributes are present. ANSI compatibility is always assumed, but the color depth will be adjusted. However, if TERM is equal to 'dumb', then all formatting will be suppressed. Some terminals use COLORTERM to advertise 24-bit capability. If COLORTERM is equal to 'truecolor' or '24bit', then the formatter will support a 24-bit color depth. Finally, if the environment variable NO_COLOR is present, no matter its value, the color depth will be 1-bit. This does not disable all formatting, so text attributes such as bold and italic are still possible, but all colors will be suppressed.
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/ Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference of the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/ The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK. aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier. awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted, such as RequestId. credentials - Provides the types and built-in credentials providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credentials providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 instance roles. endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2". session - Provides the initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file. request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request. service - Clients for AWS services. All services supported by the SDK are available under this folder. The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports. All service clients follow a common pattern of creation and usage.
When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override the configuration provided by the Session and create service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently. In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following: Environment Credentials - Set of environment variables that are useful when sub-processes are created for specific roles. Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development. Credentials can be configured in code as well by setting the Config's Credentials value to a custom provider or using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. This example creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests (see the sketch after this paragraph). The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the session Options' SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created. The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID.
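A minimal sketch of the Session/client pattern and the assume-role configuration mentioned above; the region constant and ListBuckets call are illustrative, and "myRoleARN" stands in for a real role ARN.

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
        "github.com/aws/aws-sdk-go/aws/endpoints"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // The Session sources credentials from the default credential chain
        // and carries shared configuration such as the region.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String(endpoints.UsWest2RegionID),
        }))

        // Create a credential provider that assumes the IAM role "myRoleARN",
        // and a service client whose Config overrides the Session's credentials.
        creds := stscreds.NewCredentials(sess, "myRoleARN")
        svc := s3.New(sess, &aws.Config{Credentials: creds})

        out, err := svc.ListBuckets(&s3.ListBucketsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, b := range out.Buckets {
            fmt.Println(aws.StringValue(b.Name))
        }
    }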
These values are helpful, because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct Region field to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of the Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests (see the sketch after this list). ListObjects - Base API operation that will make the API request to the service. ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly. ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response. ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument, for modifying how the request will be made or extracting information from the raw HTTP response. ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as the variadic argument. In addition to the API operations the SDK also includes several higher-level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters. WaitUntilBucketExists - Makes an API request to query an AWS service for a resource's state, and returns successfully when that state is reached. WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern. In addition, these methods take the request package's waiter options to configure the waiter and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round-trip API page requests.
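Before looking at pagination and waiters in more detail, here is a sketch of the request forms listed above, assuming an existing S3 client and an illustrative bucket name; the timeout and page handling are placeholders.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    // listExamples sketches the context, paginated, and waiter forms of the
    // ListObjects-style APIs.
    func listExamples(ctx context.Context, svc *s3.S3) error {
        bucket := aws.String("my-bucket") // illustrative bucket name

        // Context form: the request is canceled when ctx is canceled or times out.
        ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
        defer cancel()
        if _, err := svc.ListObjectsWithContext(ctx, &s3.ListObjectsInput{Bucket: bucket}); err != nil {
            return err
        }

        // Paginated form: the callback runs once per page; returning false
        // stops pagination early.
        err := svc.ListObjectsPages(&s3.ListObjectsInput{Bucket: bucket},
            func(page *s3.ListObjectsOutput, lastPage bool) bool {
                fmt.Println("objects in page:", len(page.Contents))
                return true
            })
        if err != nil {
            return err
        }

        // Waiter: blocks until the bucket exists, an error occurs, or the
        // waiter times out.
        return svc.WaitUntilBucketExists(&s3.HeadBucketInput{Bucket: bucket})
    }

    func main() {
        svc := s3.New(session.Must(session.NewSession()))
        if err := listExamples(context.Background(), svc); err != nil {
            log.Fatal(err)
        }
    }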
Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If a resource times out, the error code returned will be request.WaiterResourceNotReadyErrorCode. This example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
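A condensed sketch of that upload-with-timeout example; the bucket, key, file name, and timeout are illustrative, and the full version in the SDK docs also wires these up from flags.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/request"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        sess := session.Must(session.NewSession())
        svc := s3.New(sess)

        f, err := os.Open("upload.txt") // illustrative file name
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Cancel the request if it takes longer than the timeout.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        _, err = svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
            Bucket: aws.String("my-bucket"),
            Key:    aws.String("my-key"),
            Body:   f,
        })
        if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
            // The request was canceled because the context timed out.
            log.Fatalf("upload canceled due to timeout: %v", aerr)
        } else if err != nil {
            log.Fatalf("failed to upload object: %v", err)
        }
        log.Println("upload complete")
    }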
Command gcli generates a skeleton (code and its directory structure) you need to start building a CLI tool in Go. https://github.com/tcnksm/gcli Usage: Available commands: Use "gcli <command> -help" for more information about a command. Apply a design template file to generate a CLI project. You can generate a design template file via the 'gcli design' command. If a framework name is not specified, gcli uses codegangsta/cli. You can set the framework name via the '-F' option. To check which CLI frameworks you can use, run 'gcli list'. Usage: Options: Generate a project design template (as a TOML file). You can pass that file to the 'gcli apply' command to generate a CLI tool based on the template file. You can define which commands and flags you need in that file. Usage: Options: Show all available CLI frameworks. Usage: Generate a new CLI skeleton project. At a minimum, you must provide the executable name. You can select the CLI package and set commands via command line options. See more about that in the Options section. By default, gcli uses codegangsta/cli. To check which CLI frameworks you can use, run 'gcli list'. Usage: Options: Examples: To create a todo command application skeleton which has 'add' and 'delete' commands, Validate a design template file, checking that it has the required fields. If not, it returns an error and a non-zero exit value. Usage:
Package restlayer is an API framework heavily inspired by the excellent Python Eve (http://python-eve.org/). It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code, so you can focus on your business logic. Implemented as a net/http middleware, it plays well with other middleware like CORS (http://github.com/rs/cors) and is net/context aware thanks to xhandler. REST Layer is an opinionated framework. Unlike many API frameworks, you don’t directly control the routing and you don’t have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes to generate behind the scenes. You don’t have to take care of the HTTP headers and response, JSON encoding, etc. either. REST Layer handles HTTP conditional requests, caching, and integrity checking for you. A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB (http://github.com/clarify/rested/storers/mongo) and other databases are also available, so you have little to no code to write to make the whole system work. Moreover, REST Layer lets you create a graph API by linking resources together. Thanks to its advanced field selection syntax, you can gather resources and their dependencies in a single request, saving you from costly network roundtrips. REST Layer is composed of several sub-packages; see https://github.com/clarify/rested/blob/master/README.md for the full REST Layer documentation.
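A minimal sketch of defining a resource schema and binding it to an index; the sub-package import paths, field definitions, and the storer placeholder are assumptions (in practice you would pass a real storage handler such as the MongoDB one linked above).

    package main

    import (
        "log"
        "net/http"

        "github.com/clarify/rested/resource" // sub-package paths are assumptions
        "github.com/clarify/rested/rest"
        "github.com/clarify/rested/schema"
    )

    // user is an illustrative schema for a "users" resource.
    var user = schema.Schema{
        Fields: schema.Fields{
            "id":   {Required: true, Filterable: true},
            "name": {Required: true, Filterable: true, Validator: &schema.String{MaxLen: 150}},
        },
    }

    func main() {
        // userStore is a placeholder; in a real program use a storer such as
        // the MongoDB handler from the storers/mongo sub-package.
        var userStore resource.Storer

        index := resource.NewIndex()
        index.Bind("users", user, userStore, resource.DefaultConf)

        // The framework figures out the routes; we only mount the handler.
        api, err := rest.NewHandler(index)
        if err != nil {
            log.Fatal(err)
        }
        http.Handle("/", api)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }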
Package tarantool implements methods and structures for working with a Tarantool instance. The example demonstrates how to use custom (un)packing with typed selects and function calls. You can specify user-defined packing/unpacking functions for your types. This allows you to store complex structures within a tuple and may speed up your requests. Alternatively, you can just instruct the msgpack library to encode your structure as an array. This is safe "magic". It is easier to implement than a custom packer/unpacker, but it will be slower.
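A hedged sketch of the "encode as an array" approach mentioned above, assuming the vmihailenco/msgpack codec and an illustrative Member type; the exact struct-tag spelling can differ between msgpack versions, so check the codec your client uses.

    package tarantoolexample

    // Member is encoded as a flat msgpack array rather than a map, so its
    // fields line up with the fields of a Tarantool tuple.
    type Member struct {
        _msgpack struct{} `msgpack:",asArray"` // assumed tag; tells the codec to encode fields as an array

        Name  string
        Score uint
    }

    // For full control, implement the codec's EncodeMsgpack/DecodeMsgpack
    // methods on the type instead (the custom (un)packing mentioned above).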
Package dynsampler contains several sampling algorithms to help you select a representative set of events instead of a full stream. This package is intended to help sample a stream of tracking events, where events are typically created in response to a stream of traffic (for the purposes of logging or debugging). In general, sampling is used to reduce the total volume of events necessary to represent the stream of traffic in a meaningful way. For the purposes of these examples, the "traffic" will be a set of HTTP requests being handled by a server, and an "event" will be a blob of metadata about a given HTTP request that might be useful to keep track of later. A "sample rate" of 100 means that for every 100 requests, we capture a single event and indicate that it represents 100 similar requests. Use the `Sampler` interface in your code. Each sampling algorithm implements the Sampler interface. The following guidelines can help you choose a sampler; depending on the shape of your traffic, one may serve better than another, or you may need to write a new one! Please consider contributing it back to this package if you do.

* If your system has a completely homogeneous stream of requests: use `Static` to use a constant sample rate.

* If your system has a steady stream of requests and a well-known low-cardinality partition key (e.g. http status): use `Static` and override sample rates on a per-key basis (e.g. if you want to sample `HTTP 200/OK` events at a different rate from `HTTP 503/Server Error`).

* If your logging system has a strict cap on the rate it can receive events, use `TotalThroughput`, which will calculate sample rates based on keeping *the entire system's* representative event throughput right around (or under) a particular cap.

* If your system has a rough cap on the rate it can receive events and your partitioned keyspace is fairly steady, use `PerKeyThroughput`, which will calculate sample rates based on keeping the event throughput roughly constant *per key/partition* (e.g. per user id).

* The best choice for a system with a large key space and a large disparity between the highest-volume and lowest-volume keys is `AvgSampleRateWithMin` - it will increase the sample rate of higher-volume traffic proportionally to the logarithm of the specific key's volume. If total traffic falls below a configured minimum, it stops sampling to avoid any sampling when the traffic is too low to warrant it.

* `EMASampleRate` works like `AvgSampleRate`, but calculates sample rates based on an exponential moving average over many measurement intervals rather than a single isolated interval. In addition, it can detect large bursts in traffic and will trigger a recalculation of sample rates before the regular interval.

Each sampler implementation has additional configuration parameters and a detailed description of how it chooses a sample rate. Some implementations implement `SaveState` and `LoadState`, enabling you to serialize the Sampler's internal state and load it back. This is useful, for example, if you want to avoid losing calculated sample rates between process restarts.
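A minimal sketch of wiring one of these samplers into request handling, assuming the honeycombio/dynsampler-go import path and the AvgSampleRate configuration fields shown; the key format and rates are illustrative.

    package main

    import (
        "fmt"
        "math/rand"

        dynsampler "github.com/honeycombio/dynsampler-go" // import path is an assumption
    )

    func main() {
        // AvgSampleRate adjusts per-key sample rates based on recent volume.
        sampler := &dynsampler.AvgSampleRate{
            ClearFrequencySec: 30, // recompute rates every 30 seconds
            GoalSampleRate:    50, // aim for roughly 1 kept event per 50 seen
        }
        if err := sampler.Start(); err != nil {
            panic(err)
        }

        // For each event, build a key from the fields you partition traffic
        // by, ask for the current rate, and keep roughly 1/rate of events.
        key := fmt.Sprintf("%d %s", 200, "GET") // illustrative key: status + method
        rate := sampler.GetSampleRate(key)
        if rate < 1 {
            rate = 1
        }
        if rand.Intn(rate) == 0 {
            fmt.Printf("keep this event; it represents %d similar events\n", rate)
        }
    }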
Package micro is a pluggable framework for microservices