Package yptr is a JSONPointer implementation that can walk through a yaml.Node tree. yaml.Nodes preserve comments and locations in the source and can be used to implement in-place editing functionality that uses JSONPointer to locate the fields to be edited. It also implements a simple extension to the JSONPointer standard that handles pointers into k8s manifests, which usually contain arrays whose elements are objects with a field that uniquely specifies the array entry (e.g. "name"). For example, given a JSON/YAML input document: If "k" is a field that contains a key that uniquely identifies an element in a given array, we can select the node with the scalar 42 by first selecting the array element for which "k" has the value of "x", and then by walking to the field "v": The "~" token accepts an argument which is interpreted as a JSON value to be used as a "query-by-example" filter against elements of an array. The array element is selected if the query-by-example object is a (recursive) subset of the element. The ~{...} extension can potentially locate multiple matches. For example, "~{}" effectively acts as a wildcard. This library offers an API to retrieve multiple matches ("FindAll") or to fetch only one match and error if multiple matches are found ("Find"). JSONPointer is designed to locate exactly one node in the tree. This can be achieved only if the effective schema of the JSON/YAML document mandates that there is an identifying key in each array element you want to point to. Using the "Find" function effectively performs a dynamic check of that invariant.
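For illustration, a minimal sketch of such a lookup in Go; the import path and the Find signature are assumptions inferred from this description rather than verified API:

    package main

    import (
        "fmt"

        yptr "github.com/vmware-labs/yaml-jsonpointer" // assumed import path
        "gopkg.in/yaml.v3"
    )

    func main() {
        // yaml.v3 parses JSON as well, so a compact document is used here.
        src := []byte(`{"spec": {"containers": [{"name": "app", "image": "nginx:1.25"}]}}`)

        var doc yaml.Node
        if err := yaml.Unmarshal(src, &doc); err != nil {
            panic(err)
        }

        // Select the container whose "name" is "app", then walk to its "image" field.
        node, err := yptr.Find(&doc, `/spec/containers/~{"name":"app"}/image`)
        if err != nil {
            panic(err)
        }
        fmt.Println(node.Value, node.Line, node.Column)
    }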
bíogo is a bioinformatics library for the Go language. It is a work in progress. bíogo stems from the need to address the size and structure of modern genomic and metagenomic data sets. These properties enforce requirements on the libraries and languages used for analysis: In addition to the computational burden of massive data set sizes in modern genomics, there is an increasing need for complex pipelines to resolve questions in a tightening problem space, and also a growing need to develop new algorithms that allow novel approaches to interesting questions. These issues suggest the need for a simplicity in syntax to facilitate: Related to the second issue is the reluctance of some researchers to release code because of quality concerns http://www.nature.com/news/2010/101013/full/467753a.html The issue of code release is the first of the principles formalised in the Science Code Manifesto http://sciencecodemanifesto.org/ A language with a simple, yet expressive, syntax should facilitate development of higher quality code and thus help reduce this barrier to research code release. It seems that nearly every language has its own bioinformatics library, some of which are very mature, for example BioPerl and BioPython. Why add another one? The different libraries excel in different fields, acting as scripting glue for applications in a pipeline (much of [1-3]), interacting with external hosts [1, 2, 4, 5], wrapping lower level high performance languages with more user friendly syntax [1-4], or providing bioinformatics functions for high performance languages [5, 6]. The intended niche for bíogo lies somewhere between the scripting libraries and the high performance language libraries: easy to use for both small and large projects while having reasonable performance with computationally intensive tasks. The intent is to reduce the level of investment required to develop new research software for computationally intensive tasks. The bíogo library structure is influenced by both the structure of BioPerl and the Go core libraries. The coding style should be aligned with normal Go idioms as represented in the Go core libraries. Position numbering in the bíogo library conforms to the zero-based indexing of Go, and range indexing conforms to Go's half-open zero-based slice indexing. This is at odds with the 'normal' inclusive indexing used by molecular biologists. This choice was made to avoid inconsistent indexing spaces being used — one-based inclusive for bíogo functions and methods and zero-based for native Go slices and arrays — and so avoid the errors that this would otherwise facilitate. Note that the GFF package does allow, and defaults to, one-based inclusive indexing in its input and output of GFF files. Quality scores are supported for all sequence types, including protein. Phred and Solexa scoring systems can be read from files; however, the internal representation of quality scores is Phred, so there will be precision loss in conversion. A Solexa quality score type is provided for use where this will be a problem. Copyright ©2011-2012 The bíogo Authors except where otherwise noted. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
This package provides bindings to the generic input event interface in Linux. The evdev interface serves the purpose of passing events generated in the kernel directly to userspace through character devices that are typically located in /dev/input/. Please refer to the godoc examples and the bin/evtest example program.
Package naughtystrings is a collection of strings that have a high probability of causing issues when used as user input.
Package lingua accurately detects the natural language of written text, be it long or short. Its task is simple: It tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages. Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy. So far, the only other comprehensive open source library in the Go ecosystem for this task is Whatlanggo (https://github.com/abadojack/whatlanggo). Unfortunately, it has two major drawbacks: 1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, it does not provide adequate results. 2. The more languages take part in the decision process, the less accurate the detection results become. Lingua aims at eliminating these problems. It needs almost no configuration and yields pretty accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline. Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, 75 languages are supported. They are listed as variants of type Language. Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts: 1. a list of single words with a minimum length of 5 characters 2. a list of word pairs with a minimum length of 10 characters 3. a list of complete grammatical sentences of various lengths Both the language models and the test data have been created from separate documents of the Wortschatz corpora (https://wortschatz.uni-leipzig.de) offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively. Given the generated test data, I have compared the detection results of Lingua and Whatlanggo, running over the data of Lingua's 75 supported languages. Additionally, I have added Google's CLD3 (https://github.com/google/cld3/) to the comparison with the help of the gocld3 bindings (https://github.com/jmhodges/gocld3). Languages that are not supported by CLD3 or Whatlanggo are simply ignored during the detection process. Lingua clearly outperforms its contenders. Every language detector uses a probabilistic n-gram (https://en.wikipedia.org/wiki/N-gram) model trained on the character distribution in some training corpus.
Most libraries only use n-grams of size 3 (trigrams), which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5, which results in much more accurate prediction of the correct language. A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, the probabilistic n-gram model is taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance. In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective api methods. If you know beforehand that certain languages are never to occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good, however, filtering based on your own knowledge of the input text is always preferable. There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can become better in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages. Knowing about the most likely language is nice but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? In the example below, a slice of ConfidenceValue is returned containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages will sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0. By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it as seen below. Multiple instances of LanguageDetector share the same language models in memory, which are accessed asynchronously by the instances. By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word `prologue`, for instance, is both a valid English and French word. Lingua would output either English or French, which might be wrong in the given context.
For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated as seen below. Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise Unknown will be returned most of the time as in the example below. This is the return value for cases where language detection is not reliably possible.
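A compact sketch of the usage described above: building a detector for a restricted set of languages, setting a minimum relative distance, detecting the most likely language, and querying confidence values. The builder and method names follow the descriptions given here:

    package main

    import (
        "fmt"

        "github.com/pemistahl/lingua-go"
    )

    func main() {
        languages := []lingua.Language{lingua.English, lingua.French, lingua.German, lingua.Spanish}

        detector := lingua.NewLanguageDetectorBuilder().
            FromLanguages(languages...).
            WithMinimumRelativeDistance(0.25). // too high a value yields Unknown for short texts
            Build()

        if language, exists := detector.DetectLanguageOf("languages are awesome"); exists {
            fmt.Println("most likely language:", language)
        }

        // Confidence values for all candidate languages, sorted in descending order.
        for _, cv := range detector.ComputeLanguageConfidenceValues("languages are awesome") {
            fmt.Printf("%s: %.2f\n", cv.Language(), cv.Value())
        }
    }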
Package ajson implements decoding of JSON as defined in RFC 7159 without a predefined mapping to a Go struct, with support for JSONPath. All JSON structures are reflected into a custom Node struct, which can be inspected by its type and value. The Unmarshal method scans the whole byte slice to create a root node of the JSON structure, with all of its behaviors. Each Node has its own type and calculated value, and the value is calculated on demand. The calculated value is stored in an atomic.Value, so it is thread safe. The JSONPath method returns a slice of the found elements in the current JSON data, selected by JSONPath. JSONPath selection is described at http://goessner.net/articles/JsonPath/ JSONPath expressions always refer to a JSON structure in the same way as XPath expressions are used in combination with an XML document. Since a JSON structure is usually anonymous and doesn't necessarily have a "root member object", JSONPath assumes the abstract name $ assigned to the outer level object. JSONPath expressions can use the dot–notation or the bracket–notation for input paths. Internal or output paths will always be converted to the more general bracket–notation. JSONPath allows the wildcard symbol * for member names and array indices. It borrows the descendant operator '..' from E4X and the array slice syntax proposal [start:end:step] from ECMASCRIPT 4. Expressions of the underlying scripting language (<expr>) can be used as an alternative to explicit names or indices, as in using the symbol '@' for the current object. Filter expressions are supported via the syntax ?(<boolean expr>) as in Here is a complete overview and a side by side comparison of the JSONPath syntax elements with its XPath counterparts. The package has several predefined constants. You are free to add new ones with AddConstant The package has several predefined operators. You are free to add new ones with AddOperator Operator precedence: https://golang.org/ref/spec#Operator_precedence Arithmetic operators: https://golang.org/ref/spec#Arithmetic_operators The package has several predefined functions. You are free to add new ones with AddFunction
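As a brief sketch of the JSONPath workflow described above (a hedged illustration, not an exhaustive tour of the API):

    package main

    import (
        "fmt"

        "github.com/spyzhov/ajson"
    )

    func main() {
        data := []byte(`{"store": {"book": [{"price": 8.95}, {"price": 12.99}]}}`)

        // Select every book price in the document.
        nodes, err := ajson.JSONPath(data, "$.store.book[*].price")
        if err != nil {
            panic(err)
        }
        for _, node := range nodes {
            price, _ := node.GetNumeric() // the value is calculated lazily, on demand
            fmt.Println(price)
        }
    }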
Package hujson contains a parser and packer for the JWCC format: JSON With Commas and Comments (or "human JSON"). JWCC is an extension of standard JSON (as defined in RFC 8259) that makes it more suitable for humans and configuration files. In particular, it supports line comments (e.g., //...), block comments (e.g., /*...*/), and trailing commas after the last member or element in a JSON object or array. See https://nigeltao.github.io/blog/2021/json-with-commas-comments.html The Parse function parses HuJSON input as a Value, which is a syntax tree exactly representing the input. Comments and whitespace are represented using the Extra type. Composite types in JSON are represented using the Object and Array types. Primitive types in JSON are represented using the Literal type. The Value.Pack method serializes the syntax tree as raw output, which is byte-for-byte identical to the input if no transformations were performed on the value. A HuJSON value can be transformed using the Minimize, Standardize, Format, or Patch methods. Each of these methods mutates the value in place. Call the Clone method beforehand in order to preserve the original value. The Minimize and Standardize methods coerce HuJSON into standard JSON. The Format method formats the value; it is similar to `go fmt`, but for the HuJSON and standard JSON formats. The Patch method applies a JSON Patch (RFC 6902) to the receiving value. The changes to the JSON grammar are: This package operates with HuJSON as an AST. In order to parse HuJSON into arbitrary Go types, use this package to parse HuJSON input as an AST, strip the AST of any HuJSON-specific lexicographical elements, and then pack the AST as standard JSON output. Example usage:
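A minimal sketch of that workflow: parse JWCC input, standardize it to plain JSON, and hand the packed bytes to encoding/json:

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/tailscale/hujson"
    )

    func main() {
        in := []byte(`{
            "name": "example", // a line comment
            "ports": [80, 443,], /* trailing comma allowed */
        }`)

        v, err := hujson.Parse(in)
        if err != nil {
            panic(err)
        }
        v.Standardize()      // strip comments and trailing commas in place
        std := v.Pack()      // serialize the (now standard JSON) syntax tree

        var cfg map[string]any
        if err := json.Unmarshal(std, &cfg); err != nil {
            panic(err)
        }
        fmt.Println(cfg)
    }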
Package nlp provides implementations of selected machine learning algorithms for natural language processing of text corpora. The primary focus is the statistical semantics of plain-text documents supporting semantic analysis and retrieval of semantically similar documents. The package makes use of the Gonum (https://www.gonum.org/) library for linear algebra and scientific computing, with some inspiration taken from Python's scikit-learn (http://scikit-learn.org/stable/) and Gensim (https://radimrehurek.com/gensim/). The primary intended use case is to support document input as text strings encoded as a matrix of numerical feature vectors called a `term document matrix`. Each column in the matrix corresponds to a document in the corpus and each row corresponds to a unique term occurring in the corpus. The individual elements within the matrix contain the frequency with which each term occurs within each document (referred to as `term frequency`). Whilst textual data from document corpora are the primary intended use case, the algorithms can be used with other types of data from other sources once encoded (vectorised) into a suitable matrix, e.g. image data, sound data, users/products, etc. These matrices can be processed and manipulated through the application of additional transformations for weighting features, identifying relationships or optimising the data for analysis, information retrieval and/or predictions. Typically the algorithms in this package implement one of three primary interfaces: One of the implementations of Vectoriser is Pipeline which can be used to wire together pipelines composed of a Vectoriser and one or more Transformers arranged in serial so that the output from each stage forms the input of the next. This can be used to construct a classic LSI (Latent Semantic Indexing) pipeline (vectoriser -> TF.IDF weighting -> Truncated SVD): Whilst they take different inputs, both Vectorisers and Transformers have 3 primary methods:
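As a hedged sketch of the LSI pipeline mentioned above; the constructor names and signatures are assumptions based on this description and may differ between versions of the package:

    package main

    import (
        "fmt"

        "github.com/james-bowman/nlp"
    )

    func main() {
        corpus := []string{
            "The quick brown fox jumped over the lazy dog",
            "The cow jumped over the moon",
            "The little dog laughed to see such fun",
        }

        // vectoriser -> TF.IDF weighting -> Truncated SVD, wired together as a Pipeline.
        vectoriser := nlp.NewCountVectoriser()
        transformer := nlp.NewTfidfTransformer()
        reducer := nlp.NewTruncatedSVD(2)

        pipeline := nlp.NewPipeline(vectoriser, transformer, reducer)

        // Fit the pipeline to the corpus and project the documents into the reduced space.
        lsi, err := pipeline.FitTransform(corpus...)
        if err != nil {
            panic(err)
        }
        rows, cols := lsi.Dims()
        fmt.Printf("reduced term-document matrix: %d x %d\n", rows, cols)
    }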
Package readahead will do asynchronous read-ahead from an input io.Reader and make the data available as an io.Reader. This should be fully transparent, except that once an error has been returned from the Reader, it will not recover. The readahead object also fulfills the io.WriterTo interface, which is likely to speed up copies. Package home: https://github.com/klauspost/readahead
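A small sketch of wrapping an input reader (assuming the package's NewReader constructor and Close method):

    package main

    import (
        "io"
        "os"

        "github.com/klauspost/readahead"
    )

    func main() {
        f, err := os.Open("input.bin")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Reads ahead asynchronously; use it wherever an io.Reader is expected.
        ra := readahead.NewReader(f)
        defer ra.Close()

        // io.Copy takes the io.WriterTo fast path mentioned above.
        if _, err := io.Copy(io.Discard, ra); err != nil {
            panic(err)
        }
    }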
Package redpanda is the SDK for Redpanda's inline Data Transforms, based on WebAssembly. This library provides a framework for transforming records written within Redpanda from an input to an output topic. This example shows the basic usage of the package: This is a "transform" that does nothing but copy the same data to a new topic. This example shows a filter that uses a regexp to filter records from one topic into another. The filter can be determined when the transform is deployed by using environment variables to specify the pattern. This example shows a transform that converts CSV into JSON.
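As a rough sketch of the first, "do nothing" transform mentioned above; the import path and the OnRecordWritten, WriteEvent and Record names are assumptions based on this description and may not match the current SDK exactly:

    package main

    import (
        "github.com/redpanda-data/redpanda/src/go/transform-sdk" // assumed import path
    )

    // mirror copies every input record to the output topic unchanged.
    func mirror(e redpanda.WriteEvent) ([]redpanda.Record, error) {
        return []redpanda.Record{e.Record()}, nil
    }

    func main() {
        // Register the callback; the SDK then invokes it for each record
        // written to the input topic.
        redpanda.OnRecordWritten(mirror)
    }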
Package cmds helps build both standalone and client-server applications. The basic building blocks are requests, commands, emitters and responses. A command consists of a description of the parameters and a function. The function is passed the request as well as an emitter as arguments. It does operations on the inputs and sends the results to the user by emitting them. There are a number of emitters in this package and subpackages, but the user is free to create their own. A command is a struct containing the command's help text, a description of the arguments and options, the command's processing function and a type to let the caller know what type will be emitted. Optionally one of the functions PostRun and Encoder may be defined that consumes the function's emitted values and generates a visual representation for e.g. the terminal. Encoders work on a value-by-value basis, while PostRun operates on the value stream. An emitter has the Emit method, that takes the command's function's output as an argument and passes it to the user. The command's function does not know what kind of emitter it works with, so the same function may run locally or on a server, using an rpc interface. Emitters can also send errors using the SetError method. The user-facing emitter usually is the cli emitter. Values emitted here will be printed to the terminal using either the Encoders or the PostRun function. A response is a value that the user can read emitted values from. Responses have a method Next() that returns the next emitted value and an error value. If the last element has been received, the returned error value is io.EOF. If the application code has sent an error using SetError, the error ErrRcvdError is returned on next, indicating that the caller should call Error(). Depending on the response type, other errors may also occur. Pipes are pairs (emitter, response), such that a value emitted on the emitter can be received in the response value. Most builtin emitters are "pipe" emitters. The most prominent examples are the channel pipe and the http pipe. The channel pipe is backed by a channel. The only error value returned by the response is io.EOF, which happens when the channel is closed. The http pipe is backed by an http connection. The response can also return other errors, e.g. if there are errors on the network. To get a better idea of what's going on, take a look at the examples at https://github.com/ipfs/go-ipfs-cmds/tree/master/examples.
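A bare-bones sketch of a command whose Run function emits a single value; the exact field and type names are assumptions drawn from the description above:

    package main

    import (
        cmds "github.com/ipfs/go-ipfs-cmds"
    )

    // helloCmd emits a single string; the same Run function could be executed
    // locally or behind an rpc interface, depending on the emitter used.
    var helloCmd = &cmds.Command{
        Run: func(req *cmds.Request, re cmds.ResponseEmitter, env cmds.Environment) error {
            return re.Emit("hello from the command")
        },
    }

    func main() {
        // Execution is normally driven by an emitter from this package or its
        // subpackages (e.g. the cli emitter); see the linked examples.
        _ = helloCmd
    }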
Package minimock is a command line tool that parses the input Go source file that contains an interface declaration and generates an implementation of this interface that can be used as a mock. 1. It's integrated with the standard Go "testing" package 2. It supports variadic methods and embedded interfaces 3. It's very convenient to use generated mocks in table tests because it implements the builder pattern to set up several mocks 4. It provides a useful Wait(time.Duration) helper to test concurrent code 5. It generates helpers to check if the mocked methods have been called and keeps your tests clean and up to date 6. It generates concurrent-safe mock execution counters that you can use in your mocks to implement sophisticated mock behaviour Let's say we have the following interface declaration in the github.com/gojuno/minimock/tests package: Here is how to generate the mock for this interface: The result file ./tests/formatter_mock_test.go will contain the following code: Setting up a mock using direct assignment: Setting up a mock using the builder pattern and the Return method: Setting up a mock using the builder and the Set method: The builder pattern is convenient when you have to mock more than one method of an interface. Let's say we have an io.ReadCloser interface which has two methods: Read and Close Then you can set up a mock using just one assignment: You can also use invocation counters in your mocks and tests: minimock.Controller When you have to mock multiple dependencies in your test it's recommended to use minimock.Controller and its Finish or Wait methods. All you have to do is instantiate the Controller and pass it as an argument to the mocks' constructors: Every mock is registered in the controller, so by calling mc.Finish() you can verify that all the registered mocks have been called within your test. Sometimes we write tons of mocks for our tests but over time the tested code stops using mocked dependencies, however the mocks are still present and being initialized in the test files. So while the tested code can shrink, tests only grow. To prevent this minimock provides the Finish() method that verifies that all your mocks have been called at least once during the test run. Testing concurrent code is tough. Fortunately minimock provides you with the helper method that makes testing concurrent code easy. Here is how it works: Minimock command line args:
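Returning to the Controller-based setup described above, a compact sketch; it assumes a FormatterMock generated by minimock for a Formatter interface with a variadic Format method, so it will not compile without that generated file:

    package tests

    import (
        "testing"

        "github.com/gojuno/minimock/v3"
    )

    func TestFormatting(t *testing.T) {
        mc := minimock.NewController(t)
        defer mc.Finish() // verifies that every registered mock has been called

        // The generated constructor registers the mock with the controller;
        // the builder sets up the expected arguments and the return value.
        formatterMock := NewFormatterMock(mc).
            FormatMock.Expect("hello %s", "world").Return("hello world")

        if got := formatterMock.Format("hello %s", "world"); got != "hello world" {
            t.Fatalf("unexpected result: %q", got)
        }
    }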
Package template implements data-driven templates for generating textual output. To generate HTML output, see package html/template, which has the same interface as this package but automatically secures HTML output against certain attacks. Templates are executed by applying them to a data structure. Annotations in the template refer to elements of the data structure (typically a field of a struct or a key in a map) to control execution and derive values to be displayed. Execution of the template walks the structure and sets the cursor, represented by a period '.' and called "dot", to the value at the current location in the structure as execution proceeds. The input text for a template is UTF-8-encoded text in any format. "Actions"--data evaluations or control structures--are delimited by "{{" and "}}"; all text outside actions is copied to the output unchanged. Actions may not span newlines, although comments can. Once parsed, a template may be executed safely in parallel. Here is a trivial example that prints "17 items are made of wool". More intricate examples appear below. Here is the list of actions. "Arguments" and "pipelines" are evaluations of data, defined in detail below. An argument is a simple value, denoted by one of the following. Arguments may evaluate to any type; if they are pointers the implementation automatically indirects to the base type when required. If an evaluation yields a function value, such as a function-valued field of a struct, the function is not invoked automatically, but it can be used as a truth value for an if action and the like. To invoke it, use the call function, defined below. A pipeline is a possibly chained sequence of "commands". A command is a simple value (argument) or a function or method call, possibly with multiple arguments: A pipeline may be "chained" by separating a sequence of commands with pipeline characters '|'. In a chained pipeline, the result of each command is passed as the last argument of the following command. The output of the final command in the pipeline is the value of the pipeline. The output of a command will be either one value or two values, the second of which has type error. If that second value is present and evaluates to non-nil, execution terminates and the error is returned to the caller of Execute. A pipeline inside an action may initialize a variable to capture the result. The initialization has syntax where $variable is the name of the variable. An action that declares a variable produces no output. If a "range" action initializes a variable, the variable is set to the successive elements of the iteration. Also, a "range" may declare two variables, separated by a comma: in which case $index and $element are set to the successive values of the array/slice index or map key and element, respectively. Note that if there is only one variable, it is assigned the element; this is opposite to the convention in Go range clauses. A variable's scope extends to the "end" action of the control structure ("if", "with", or "range") in which it is declared, or to the end of the template if there is no such control structure. A template invocation does not inherit variables from the point of its invocation. When execution begins, $ is set to the data argument passed to Execute, that is, to the starting value of dot. Here are some example one-line templates demonstrating pipelines and variables.
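A few representative one-liners of this kind, each shown with a short description:

    {{"\"output\""}}
        A string constant.
    {{`"output"`}}
        A raw string constant.
    {{printf "%q" "output"}}
        A function call.
    {{"output" | printf "%q"}}
        A function call whose final argument comes from the previous command.
    {{with $x := "output"}}{{printf "%q" $x}}{{end}}
        A with action that declares a variable and uses it in its body.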
All produce the quoted word "output": During execution functions are found in two function maps: first in the template, then in the global function map. By default, no functions are defined in the template but the Funcs method can be used to add them. Predefined global functions are named as follows. The boolean functions take any zero value to be false and a non-zero value to be true. There is also a set of binary comparison operators defined as functions: For simpler multi-way equality tests, eq (only) accepts two or more arguments and compares the second and subsequent to the first, returning, in effect, arg1==arg2 || arg1==arg3 || arg1==arg4 ... (Unlike with || in Go, however, eq is a function call and all the arguments will be evaluated.) The comparison functions work on basic types only (or named basic types, such as "type Celsius float32"). They implement the Go rules for comparison of values, except that size and exact type are ignored, so any integer value, signed or unsigned, may be compared with any other integer value. (The arithmetic value is compared, not the bit pattern, so all negative integers are less than all unsigned integers.) However, as usual, one may not compare an int with a float32 and so on. Each template is named by a string specified when it is created. Also, each template is associated with zero or more other templates that it may invoke by name; such associations are transitive and form a name space of templates. A template may use a template invocation to instantiate another associated template; see the explanation of the "template" action above. The name must be that of a template associated with the template that contains the invocation. When parsing a template, another template may be defined and associated with the template being parsed. Template definitions must appear at the top level of the template, much like global variables in a Go program. The syntax of such definitions is to surround each template declaration with a "define" and "end" action. The define action names the template being created by providing a string constant. Here is a simple example: This defines two templates, T1 and T2, and a third T3 that invokes the other two when it is executed. Finally it invokes T3. If executed this template will produce the text By construction, a template may reside in only one association. If it's necessary to have a template addressable from multiple associations, the template definition must be parsed multiple times to create distinct *Template values, or must be copied with the Clone or AddParseTree method. Parse may be called multiple times to assemble the various associated templates; see the ParseFiles and ParseGlob functions and methods for simple ways to parse related templates stored in files. A template may be executed directly or through ExecuteTemplate, which executes an associated template identified by name. To invoke our example above, we might write, or to invoke a particular template explicitly by name,
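A compact sketch putting the pieces together: defining the associated templates T1, T2 and T3 in one parse, then invoking them both ways:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        const text = `{{define "T1"}}ONE{{end}}
    {{define "T2"}}TWO{{end}}
    {{define "T3"}}{{template "T1"}} {{template "T2"}}{{end}}
    {{template "T3"}}`

        tmpl := template.Must(template.New("main").Parse(text))

        // Execute the template directly; T3 invokes T1 and T2, producing "ONE TWO".
        _ = tmpl.Execute(os.Stdout, "no data needed")

        // Or invoke a particular associated template explicitly by name.
        _ = tmpl.ExecuteTemplate(os.Stdout, "T2", "no data needed")
    }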
Package goncurses is a new curses (ncurses) library for the Go programming language. It implements all the ncurses extension libraries: form, menu and panel. Minimal operation would consist of initializing the display: It is important to always call End() before your program exits. If you fail to do so, the terminal will not perform properly and will either need to be reset or restarted completely. CAUTION: Calls to ncurses functions are normally neither atomic nor reentrant and therefore extreme care should be taken to ensure ncurses functions are not called concurrently. Specifically, never write data to the same window concurrently nor accept input and send output to the same window, as both alter the underlying C data structures in an unsafe manner. Ideally, you should structure your program to ensure all ncurses related calls happen in a single goroutine. This is probably most easily achieved via channels and Go's built-in select. Alternatively, or additionally, you can use a mutex to protect any calls in multiple goroutines from happening concurrently. Failure to do so will result in unpredictable and undefined behaviour in your program. The examples directory contains demonstrations of many of the capabilities goncurses can provide.
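A minimal sketch of that initialization pattern, with End deferred and all calls kept in a single goroutine:

    package main

    import (
        "log"

        gc "github.com/rthornton128/goncurses"
    )

    func main() {
        stdscr, err := gc.Init()
        if err != nil {
            log.Fatal(err)
        }
        // Always restore the terminal before exiting.
        defer gc.End()

        stdscr.Print("Hello, ncurses!")
        stdscr.Refresh()
        stdscr.GetChar() // wait for a key press
    }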
Package translate provides the API client, operations, and parameter types for Amazon Translate. Provides translation of the input content from the source language to the target language.
Package segment is a library for performing Unicode Text Segmentation as described in Unicode Standard Annex #29 http://www.unicode.org/reports/tr29/ Currently only segmentation at Word Boundaries is supported. The functionality is exposed in two ways: 1. You can use a bufio.Scanner with the SplitWords implementation of SplitFunc. The SplitWords function will identify the appropriate word boundaries in the input text and the Scanner will return tokens at the appropriate place. 2. Sometimes you would also like information returned about the type of token. To do this we have introduced a new type named Segmenter. It works just like Scanner but additionally returns the token type.
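A short sketch of both approaches (assuming the blevesearch/segment import path):

    package main

    import (
        "bufio"
        "fmt"
        "strings"

        "github.com/blevesearch/segment"
    )

    func main() {
        // 1. bufio.Scanner with the SplitWords SplitFunc.
        scanner := bufio.NewScanner(strings.NewReader("Hello World, 你好"))
        scanner.Split(segment.SplitWords)
        for scanner.Scan() {
            fmt.Println(scanner.Text())
        }

        // 2. Segmenter, which also reports the token type.
        segmenter := segment.NewWordSegmenter(strings.NewReader("Hello World, 你好"))
        for segmenter.Segment() {
            fmt.Println(segmenter.Text(), segmenter.Type())
        }
        if err := segmenter.Err(); err != nil {
            panic(err)
        }
    }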
Package ecs provides interfaces for the Entity Component System (ECS) paradigm used by engo.io/engo. It is predominantly used by games; however, it can also find use in other applications. The ECS paradigm aims to decouple distinct domains (e.g. rendering, input handling, AI) from one another, through a composition of independent components. The core concepts of ECS are described below. An entity is simply a set of components with a unique ID attached to it, nothing more. In particular, an entity has no logic attached to it and stores no data explicitly (except for the ID). Each entity corresponds to a specific object within the game, such as a character, an item, or a spell. A component stores the raw data related to a specific aspect of an entity, nothing more. In particular, a component has no logic attached to it. Different aspects may include the position, animation graphics, or input actions of an entity. A system implements logic for processing entities possessing components of the same aspects as the system. For instance, an animation system may render entities possessing animation components.
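A small illustrative sketch of the three concepts in Go terms; the type names (BasicEntity, World, the System interface) follow engo's ecs package, but the component and system shown here are made up for the example:

    package main

    import (
        "fmt"

        "github.com/EngoEngine/ecs"
    )

    // PositionComponent stores raw data only; no logic.
    type PositionComponent struct{ X, Y float32 }

    // Player is an entity: a unique ID plus a set of components.
    type Player struct {
        ecs.BasicEntity
        PositionComponent
    }

    // MovementSystem implements logic over entities that have a position.
    type MovementSystem struct {
        players []*Player
    }

    func (m *MovementSystem) Add(p *Player)            { m.players = append(m.players, p) }
    func (m *MovementSystem) Remove(e ecs.BasicEntity) {}
    func (m *MovementSystem) Update(dt float32) {
        for _, p := range m.players {
            p.X += 10 * dt
        }
    }

    func main() {
        world := ecs.World{}
        system := &MovementSystem{}
        world.AddSystem(system)

        player := &Player{BasicEntity: ecs.NewBasic()}
        system.Add(player)

        world.Update(1.0 / 60) // advance all systems by one frame
        fmt.Println(player.X, player.Y)
    }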
Package recaptcha handles reCaptcha (http://www.google.com/recaptcha) form submissions. This package is designed to be called from within an HTTP server or web framework which offers reCaptcha form inputs and requires them to be evaluated for correctness. Edit the recaptchaPrivateKey constant before building and using.
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a white list of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above then the default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr; if you want to output logs to a file you can set it via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there's a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy (request level configuration); alternatively, global (all services) or client level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface. The retry behavior precedence (highest to lowest) is defined as below: The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). The default retry policy is defined by: Default Retry-able Errors Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff). HTTP Code Customer-facing Error Code Apart from the above errors, retries should also be attempted in the following client side errors: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout) The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (exceptions raised by a Circuit Breaker in an open state). Default Termination Strategy The termination strategy defines when SDKs should stop attempting to retry. In other words, it's the deadline for retries. The OCI SDKs should stop retrying the operation after 7 retry attempts. This means the SDKs will have retried for ~98 seconds (~1.5 minutes) of total delay. SDKs will make a total of 8 attempts.
(1 initial request + 7 retries) Default Delay Strategy - The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK – Exponential backoff with jitter, using: 1. The base time to use in retry calculations will be 1 second 2. An exponent of 2. When calculating the next retry time, the SDK will raise this to the power of the number of attempts 3. A maximum wait time between calls of 30 seconds (Capped) 4. Added jitter value between 0-1000 milliseconds to spread out the requests Configure and use default retry policy You can set this retry policy for a single request: or for all requests made by a client: or for all requests made by all clients: or setting default retry via environment variable, which is a global switch for all services: Some services enable retry for operations by default; this can be overridden using any alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default. Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example: This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm) If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
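For instance, a hedged sketch of attaching the default retry policy to a single request (the compartment OCID is a placeholder; see the linked example_retry_test.go for complete examples):

    package main

    import (
        "context"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    // listWithRetry attaches the SDK's default retry policy to one request.
    func listWithRetry(client identity.IdentityClient, compartmentID string) error {
        retryPolicy := common.DefaultRetryPolicy()

        request := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String(compartmentID),
            RequestMetadata: common.RequestMetadata{
                RetryPolicy: &retryPolicy, // request level configuration
            },
        }

        _, err := client.ListAvailabilityDomains(context.Background(), request)
        return err
    }

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            panic(err)
        }
        // Placeholder OCID for illustration only.
        if err := listWithRetry(client, "ocid1.compartment.oc1..example"); err != nil {
            panic(err)
        }
    }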
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency. A Circuit Breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles; it also enables an application to detect whether the fault has been resolved. If the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error; this also saves the service from being overwhelmed with network calls in case of an outage. Circuit Breaker Configuration Definitions 1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold. For example when more than 50% of the recorded calls have failed. 2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made 3. Failure Exceptions - The list of Exceptions that will be regarded as failures for the circuit. 4. Minimum number of calls/ Volume threshold - Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. 1. Failure Rate Threshold - 80% - This means when 80% of the requests calculated for a time window of 120 seconds have failed then the circuit will transition from closed to open. 2. Minimum number of calls/ Volume threshold - A value of 10, for the above defined time window of 120 seconds. 3. Reset Timeout - 30 seconds to wait before setting the breaker to halfOpen state, and trying the action again. 4. Failure Exceptions - The failures for the circuit will only be recorded for the retryable/transient exceptions. This means only the following exceptions will be regarded as failure for the circuit. HTTP Code Customer-facing Error Code Apart from the above, the following client side exceptions will also be treated as a failure for the circuit: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout) The Go SDK enables the circuit breaker with a default configuration for most of the service clients; if you don't want to enable the solution, you can disable the functionality before your application runs. The Go SDK also supports customizing the Circuit Breaker with specific configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default. The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1.
Configuring an environment variable as described here https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for the audit service client): The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see (https://github.com/oracle/oci-go-sdk/blob/master/common/common.go). Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable: Got a fix for a bug, or a new feature you'd like to contribute?
The SDK is open source and accepting pull requests on GitHub https://github.com/oracle/oci-go-sdk Licensing information available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom Please refer to this link: https://github.com/oracle/oci-go-sdk#help
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs and has the ability to dive into arrays and maps of any type. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for us, building an internationalized application, I needed to know the field and what validation failed so that I could provide an error in the user's specific language. Custom functions can be added Cross Field Validation can be implemented, for example Start & End Date range validation Multiple validators on a field will process in the order defined Bad Validator definitions are not handled by the library NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed your own custom validator should be implemented. NOTE2: comma is the default separator of validation tags; if you wish to have a comma included within the parameter i.e. excludesall=, you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C NOTE3: pipe is the default separator of or validation tags; if you wish to have a pipe included within the parameter i.e. excludesall=| you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C Here is a list of the current built in validators: Validator notes: This package panics when bad input is provided; this is by design: bad code like that should not make it to production.
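A brief sketch of struct-tag validation with this package; the struct, its tags and the v10 import path are illustrative:

    package main

    import (
        "fmt"

        "github.com/go-playground/validator/v10"
    )

    type User struct {
        Email string `validate:"required,email"`
        Age   uint8  `validate:"gte=0,lte=130"`
    }

    func main() {
        validate := validator.New()

        err := validate.Struct(User{Email: "not-an-email", Age: 200})
        if err != nil {
            // Inspect which field failed and which validation tag, so the caller
            // can build its own (possibly internationalized) error message.
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Field(), fe.Tag())
            }
        }
    }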
Package bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`.
This accepts a [regular expression](https://github.com/google/re2/wiki/Syntax) string, which will be used to match a portion of the map keys and function names that should be stripped out. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool. When you want to embed big files or plenty of files, the generated output is really big (perhaps over 3 MB). Even though the generated file is not meant to be read, you may still need to run analysis tools or an editor over it, and these can become slower with such a large file. Generating big files can be avoided with the `-split` command line option. In that case, the given output is a directory path: the tool will generate one source file per file to embed, plus a common file named `common.go` which contains common parts such as the API.
Package stream provides filters that can be chained together in a manner similar to Unix pipelines. A simple example that prints all go files under the current directory: stream.Run is passed a list of filters that are chained together (stream.Find, stream.Grep, stream.WriteLines are filters). Each filter takes as input a sequence of strings and produces a sequence of strings. The empty sequence is passed as input to the first filter. The output of one filter is fed as input to the next filter. stream.Run is just one way to execute filters. Others are stream.Contents (returns the output of the last filter as a []string), and stream.ForEach (executes a supplied function for every output item). Filter execution can result in errors. These are returned from stream functions normally. For example, the following call will return a non-nil error. Each filter takes as input a sequence of strings (read from a channel) and produces as output a sequence of strings (written to a channel). The stream package provides a bunch of useful filters. Applications can define their own filters easily. For example, here is a filter that repeats every input n times: The output will be: Note that Repeat returns a FilterFunc, a function type that implements the Filter interface. This is a common implementation pattern: many simple filters can be expressed as a single function of type FilterFunc. FilterFunc is an appropriate type to use for most filters like Repeat above. However for some filters, dynamic customization is appropriate. Such filters provide their own implementation of the Filter interface with extra methods. For example, stream.Sort provides extra methods that can be used to control how items are sorted: The interface of this package is inspired by the http://labix.org/pipe package. Users may wish to consider that package in case it fits their needs better.
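As a concrete illustration of the Repeat filter described above, here is a hedged sketch using the package's FilterFunc and Arg types (this assumes Arg exposes In and Out string channels and that Items/WriteLines are available, as in the package's own examples):

    package main

    import (
        "os"

        "github.com/ghemawat/stream"
    )

    // Repeat returns a filter that emits every input line n times.
    func Repeat(n int) stream.Filter {
        return stream.FilterFunc(func(arg stream.Arg) error {
            for line := range arg.In {
                for i := 0; i < n; i++ {
                    arg.Out <- line
                }
            }
            return nil
        })
    }

    func main() {
        // Prints "hello" twice, then "world" twice.
        stream.Run(
            stream.Items("hello", "world"),
            Repeat(2),
            stream.WriteLines(os.Stdout),
        )
    }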
Package parsec provides a library of parser-combinators. The basic idea behind the parsec module is that it allows programmers to compose a basic set of terminal parsers, a.k.a. tokenizers, into a tree of higher-level parsers, using combinators like: And, OrdChoice, Kleene, Many, Maybe. To begin with, there are four basic types that need to be kept in mind while creating and composing parsers. Scanner, an interface type that encapsulates the input text. A built-in scanner called SimpleScanner is supplied along with this package. Developers can also implement their own scanner types. The following example creates a new instance of SimpleScanner, using an input text: Nodify, a callback function supplied while combining parser functions. If the underlying parsing logic matches the input text, then the callback is dispatched with the list of matching ParsecNodes. The value returned by the callback function is then used as the ParsecNode item in the higher-level list of ParsecNodes. Parser, simple parsers are functions that match the input text for specific patterns. Simple parsers can be combined using one of the supplied combinators to construct a higher-level parser. A parser function takes a Scanner object and applies the underlying parsing logic; if the underlying logic succeeds, the Nodify callback is dispatched and a ParsecNode and a new Scanner object (with its cursor moved forward) are returned. If the parser fails to match, it shall return the input scanner object as it is, along with a nil ParsecNode. ParsecNode, an interface type that encapsulates one or more tokens from the input text, as a terminal node or non-terminal node. If the input text is going to be a single token like `10` or `true` or `"some string"`, then all we need is a single Parser function that can tokenize the input text into a terminal node. But our applications are seldom that simple. Almost all the time we need to parse the input text for more than one token, and most of the time we need to compose them into a tree of terminal and non-terminal nodes. This is where combinators are useful. The package provides a set of combinators to help combine terminal parsers into higher-level parsers. They are: All the above mentioned combinators accept one or more parser functions as arguments, either by value or by reference. The reason for allowing parser arguments by reference is to be able to define recursive parsing logic, like parsing nested arrays: Parsers for a standard set of tokens are supplied along with this package. All of the terminal parsers, except End and NoEnd, return a Terminal type as the ParsecNode, while End and NoEnd return a boolean type as the ParsecNode. There is an experimental feature to use CSS-like selectors for querying an Abstract Syntax Tree (AST). Types, APIs and methods associated with AST and Queryable are unstable and are expected to change in the future. While the Scanner, Parser and ParsecNode types are re-used in AST and Queryable, the combinator functions are re-implemented as AST methods. Similarly, the ASTNodify type is to be used instead of the Nodify type. Otherwise all the parsec techniques mentioned above are equally applicable to AST. Additionally, the following points are worth noting while using AST:
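A hedged sketch of composing terminal parsers with a combinator, assuming goparsec-style NewScanner, Int, Token and And functions (names taken from the package's examples; exact signatures and token-pattern syntax may differ between versions):

    package main

    import (
        "fmt"

        parsec "github.com/prataprc/goparsec"
    )

    func main() {
        // Terminal parsers: an integer, a comma token, another integer.
        comma := parsec.Token(`,`, "COMMA")
        pair := parsec.And(
            // Nodify callback: the matched nodes become the higher-level node.
            func(nodes []parsec.ParsecNode) parsec.ParsecNode { return nodes },
            parsec.Int(), comma, parsec.Int(),
        )

        scanner := parsec.NewScanner([]byte("10,42"))
        node, _ := pair(scanner)
        fmt.Println(node)
    }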
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field validation and even Cross Field Cross Struct validation for nested structs. Built In Validator: A simple example usage: The error can be used like so. Both StructValidationErrors and FieldValidationError implement the Error interface, but their intended use is for development and debugging, not as a production error message. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; for example, when building an internationalized application you need to know the field and which validation failed so that you can provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes... Agreed, the Flatten function to the rescue! Flatten will return a map of FieldValidationErrors, but the field name will be namespaced. Custom functions can be added. Cross Field Validation can be implemented, for example Start & End Date range validation. Multiple validators on a field will process in the order defined. Bad Validator definitions are not handled by the library. NOTE: Baked In Cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. Here is a list of the current built-in validators: Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
Package podcast generates a fully compliant iTunes and RSS 2.0 podcast feed for GoLang using a simple API. Full documentation with detailed examples is located at https://godoc.org/github.com/eduncan911/podcast To use, `go get` and `import` the package like your typical GoLang library. The API exposes a number of method receivers on structs that implement the logic required to comply with the specifications and ensure a compliant feed. A number of overrides occur to help with iTunes visibility of your episodes. Notably, the `Podcast.AddItem` function performs most of the heavy lifting by taking the `Item` input and performing validation, overrides and duplicate setters through the feed. Full detailed Examples of the API are at https://godoc.org/github.com/eduncan911/podcast. This library is supported on GoLang 1.7 and higher. We have implemented Go Modules support and the CI pipeline shows it working with new installs, tested with Go 1.13. To keep 1.7 compatibility, we use `go mod vendor` to maintain the `vendor/` folder for older 1.7 and later runtimes. If either runtime has an issue, please create an Issue and I will address it. For version 1.x, you are not restricted and have full control over your feeds. You may choose to skip the API methods and instead use the structs directly. The fields have been grouped by RSS 2.0 and iTunes fields, with iTunes specific fields all prefixed with the letter `I`. However, do note that the 2.x version currently in progress will break this extensibility and enforce API methods going forward. This is to ensure that the feed can both be marshalled and unmarshalled back and forth (the current 1.x branch can only be unmarshalled - hence the work for 2.x). `go-fuzz` has been added in 1.4.1, covering all exported API methods. The fuzz tests have been run extensively and no issues have come out of them yet (most tests were run overnight, over about 11 hours with zero crashes). If you wish to help fuzz the inputs, with Go 1.13 or later you can run `go-fuzz` on any of the inputs. To obtain a list of available funcs to pass, just run `go-fuzz` without any parameters: If you do find an issue, please raise an issue immediately and I will quickly address it. The 1.x branch is now mostly in maintenance mode, open to PRs. This means no more planned features on the 1.x feature branch are expected. With the success of 6 iTunes-accepted podcasts I have published with this library, and with the feedback from the community, the 1.x releases are now considered stable. The 2.x branch's primary focus is to allow for bi-directional marshalling. Currently, the 1.x branch only allows unmarshalling to a serialized feed. An attempt to marshal a serialized feed back into a Podcast form will error or not work correctly. Note that while the 2.x branch is targeted to remain backwards compatible, this is only true if you use the public API funcs to set parameters. Several of the underlying public fields are being removed in order to accommodate the marshalling of serialized data. Therefore, a version 2.x is denoted for this release. We use the SemVer versioning scheme. You can rest assured that pulling 1.x branches will remain backwards compatible now and into the future. However, the new 2.x branch, while keeping the same API, is expected to break those that bypass the API methods and use the underlying public properties instead. v1.4.2 v1.4.1 v1.4.0 v1.3.2 v1.3.1 v1.3.0 v1.2.1 v1.2.0 v1.1.0 v1.0.0 RSS 2.0: https://cyber.harvard.edu/rss/rss.html Podcasts: https://help.apple.com/itc/podcasts_connect/#/itca5b22233
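A minimal sketch of building and emitting a feed through the API methods, assuming the podcast.New / AddItem / Encode entry points from the package's godoc examples (URLs and titles here are placeholders):

    package main

    import (
        "log"
        "os"
        "time"

        "github.com/eduncan911/podcast"
    )

    func main() {
        now := time.Now()
        p := podcast.New("Sample Show", "https://example.com", "A sample feed", &now, &now)

        // AddItem performs the validation and iTunes overrides mentioned above.
        if _, err := p.AddItem(podcast.Item{
            Title:       "Episode 1",
            Description: "First episode",
            Link:        "https://example.com/ep1",
            PubDate:     &now,
        }); err != nil {
            log.Fatal(err)
        }

        // Write the compliant RSS/iTunes XML to stdout.
        if err := p.Encode(os.Stdout); err != nil {
            log.Fatal(err)
        }
    }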
Package iplib provides enhanced tools for working with IP networks and addresses. These tools are built upon and extend the generic functionality found in the Go "net" package. The main library comes in two parts: a series of utilities for working with net.IP (sort, increment, decrement, delta, compare, convert to binary or hex string, convert between net.IP and integer) and an enhancement of net.IPNet called iplib.Net that can calculate the first and last IPs of a block as well as enumerating the block into []net.IP, incrementing and decrementing within the boundaries of the block and creating sub- or super-nets of it. For most features iplib exposes a v4 and a v6 variant to handle each network properly, but in all cases there is a generic function that handles any IP and routes between them. One caveat to this is those functions that require or return an integer value representing the address; in these cases the IPv4 variants take an int32 as input while the IPv6 functions require a *big.Int in order to work with the 128 bits of address. For managing the complexity of IPv6 address-spaces, this library adds a new mask, called a Hostmask, as an optional constraint on iplib.Net6 networks; please see the type documentation for more information on using it. For functions where it is possible to exceed the address-space, the rule is that underflows return the version-appropriate all-zeroes address while overflows return the all-ones. There are also two submodules under iplib: the iplib/iid module contains functions for generating RFC 7217-compliant IPv6 Interface ID addresses, and iplib/iana imports the IANA IP Special Registries and exposes functions for comparing IP addresses against those registries to determine if the IP is part of a special reservation (for example RFC 1918 private networks or the RFC 3849 documentation network).
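A hedged sketch of the net.IP utilities described above (the function names NextIP, IncrementIPBy and CompareIPs are taken from iplib's documentation; exact names and signatures may vary between versions):

    package main

    import (
        "fmt"
        "net"

        "github.com/c-robinson/iplib"
    )

    func main() {
        ip := net.ParseIP("192.168.1.10")

        fmt.Println(iplib.NextIP(ip))                       // 192.168.1.11
        fmt.Println(iplib.IncrementIPBy(ip, 10))            // 192.168.1.20
        fmt.Println(iplib.CompareIPs(ip, iplib.NextIP(ip))) // -1, ip sorts first
    }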
Package duit is a pure go, cross-platform, MIT-licensed, UI toolkit for developers. The examples/ directory has small code examples for working with duit and its UIs. Examples are the recommended starting point. Start with NewDUI to create a DUI: essentially a window and all the UI state. The user interface consists of a hierarchy of "UIs" like Box, Scroll, Button, Label, etc. They are called UIs, after the interface UI they all implement. The zero structs for UIs have sane default behaviour so you only have to fill in the fields you need. UIs are kept/wrapped in a Kid, to track their layout/draw state. Use NewKids() to build up the UIs for your application. You won't see much of the Kid-types/functions otherwise, unless you implement a new UI. You are in charge of the main event loop, receiving mouse/keyboard/window events from the dui.Inputs channel, and typically passing them on unchanged to dui.Input. All callbacks and functions on UIs are called from inside dui.Input. From there you can also safely change the UIs, no locking required. After changing a UI you are responsible for calling MarkLayout or MarkDraw to tell duit the UI needs a new layout or draw. This may sound like more work, but this tradeoff keeps the API small and easy to use. If you need to change the UI from a goroutine outside of the main loop, e.g. for blocking calls, you can send a function that makes those modifications on the dui.Call channel, which will be run in the main loop through dui.Inputs. After handling an input, duit will layout or draw as necessary, no need to render explicitly. Embedding a UI into your own data structure is often an easy way to build up UI hierarchies. Scroll and Edit show a scrollbar. Use button 1 on the scrollbar to scroll up, button 3 to scroll down. If you click nearer the top, you scroll less; nearer the bottom, more. Button 2 scrolls to the absolute place where you clicked. Button 4 and 5 are wheel up and wheel down, and also scroll less/more depending on position in the UI.
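A minimal sketch of the main loop described above, based on duit's examples (the NewDUI, Top, Render, Inputs and Input names are assumed from those examples and may differ between versions):

    package main

    import (
        "log"

        "github.com/mjl-/duit"
    )

    func main() {
        dui, err := duit.NewDUI("hello", nil)
        if err != nil {
            log.Fatalf("new dui: %v", err)
        }

        // Fill only the fields you need; zero values have sane defaults.
        dui.Top.UI = &duit.Label{Text: "hello, world"}
        dui.Render()

        // The application owns the event loop: receive events and pass them
        // back unchanged; UI callbacks run inside dui.Input.
        for {
            dui.Input(<-dui.Inputs)
        }
    }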
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling request and responses.
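A hedged sketch of the "list availability domains" example mentioned above (client construction and request/response shapes follow the identity package; the tenancy OCID is read from the default configuration provider and error handling is kept minimal):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        tenancyID, err := common.DefaultConfigProvider().TenancyOCID()
        if err != nil {
            log.Fatal(err)
        }

        resp, err := client.ListAvailabilityDomains(context.Background(), identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String(tenancyID),
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, ad := range resp.Items {
            fmt.Println(*ad.Name)
        }
    }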
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above then the default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr; if you want to output logs to a file you can set it via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there is a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy (request level configuration); alternatively, global (all services) or client level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the Seeker interface. The retry behavior precedence (highest to lowest) is defined as below: The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). This default retry policy can be created using: You can set this retry policy for a single request: or for all requests made by a client: or for all requests made by all clients: or set the default retry via an environment variable, which is a global switch for all services: Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail.
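As a hedged sketch, attaching the default retry policy at the request level looks roughly like this (DefaultRetryPolicy and RequestMetadata are the helpers referred to above; the imports and request type are reused from the earlier identity example):

    // Assumes the identity client and tenancyID from the earlier sketch.
    func listWithRetry(ctx context.Context, client identity.IdentityClient, tenancyID string) error {
        policy := common.DefaultRetryPolicy()
        req := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String(tenancyID),
            // Attach the retry policy to this single request.
            RequestMetadata: common.RequestMetadata{RetryPolicy: &policy},
        }
        _, err := client.ListAvailabilityDomains(ctx, req)
        return err
    }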
If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example: This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm) If there were no eventually consistent actions within the recent past, then this special retry strategy is not used. If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency. A Circuit Breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles; it also enables an application to detect whether the fault has been resolved. If the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips and all further calls to the circuit breaker return an error; this also saves the service from being overwhelmed with network calls in case of an outage. The Go SDK enables the circuit breaker with a default configuration; if you do not want to enable the solution, you can disable the functionality before your application starts running. The Go SDK also supports customizing the Circuit Breaker with specific configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go The GO SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1. Configuring environment variable as described here https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for audit service client): The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts.
The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see (https://github.com/oracle/oci-go-sdk/blob/master/common/common.go). Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable: Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub https://github.com/oracle/oci-go-sdk Licensing information available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom Please refer to this link: https://github.com/oracle/oci-go-sdk#help
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling request and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above then the default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr; if you want to output logs to a file you can set it via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there is a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go The GO SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1. Configuring environment variable as described here https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for audit service client): The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value.
When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see (https://github.com/oracle/oci-go-sdk/blob/master/common/common.go). Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable: Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub https://github.com/oracle/oci-go-sdk Licensing information available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom Please refer to this link: https://github.com/oracle/oci-go-sdk#help
Package edn implements encoding and decoding of EDN values as defined in https://github.com/edn-format/edn. For a full introduction on how to use go-edn, see https://github.com/go-edn/edn/blob/v1/docs/introduction.md. Fully self-contained examples of go-edn can be found at https://github.com/go-edn/edn/tree/v1/examples. Note that the small examples in this package do not check errors as thoroughly as you should when you use this package. This is done because I'd like the examples to be easily readable and understandable. The bigger examples provide proper error handling. EDN, in contrast to JSON, supports arbitrary values as keys. This example shows how one can implement enums and sets, and how to support multiple different forms for a specific value type. The set implemented here supports the notation `:all` for all values. This example shows how to read and write basic EDN tags, and how this can be utilised: In contrast to encoding/json, you can read in data where you only know that the input satisfies some sort of interface, provided the value is tagged. This example shows how one can do streaming with the decoder, and how to properly know when the stream has no elements left.
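A minimal sketch of decoding and encoding EDN with go-edn, assuming the familiar Marshal/Unmarshal entry points that mirror encoding/json (the import path is the one documented for go-edn v1; struct fields are matched to keyword keys via `edn` tags):

    package main

    import (
        "fmt"

        "olympos.io/encoding/edn" // the go-edn v1 import path; repo at github.com/go-edn/edn
    )

    type Config struct {
        Name    string   `edn:"name"`
        Timeout float64  `edn:"timeout"`
        Tags    []string `edn:"tags"`
    }

    func main() {
        input := `{:name "server" :timeout 2.5 :tags ["a" "b"]}`

        var c Config
        if err := edn.Unmarshal([]byte(input), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", c)

        out, err := edn.Marshal(c)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }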
Package lexruntimeservice provides the API client, operations, and parameter types for Amazon Lex Runtime Service. Amazon Lex provides both build and runtime endpoints. Each endpoint provides a set of operations (API). Your conversational bot uses the runtime API to understand user utterances (user input text or voice). For example, suppose a user says "I want pizza"; your bot sends this input to Amazon Lex using the runtime API. Amazon Lex recognizes that the user request is for the OrderPizza intent (one of the intents defined in the bot). Then Amazon Lex engages in user conversation on behalf of the bot to elicit required information (slot values, such as pizza size and crust type), and then performs fulfillment activity (that you configured when you created the bot). You use the build-time API to create and manage your Amazon Lex bot. For a list of build-time operations, see the build-time API.
Command gencodec generates marshaling methods for struct types. When gencodec is invoked on a directory and type name, it creates a Go source file containing JSON, YAML and TOML marshaling methods for the type. The generated methods add features which the standard json package cannot offer. The gencodec:"required" tag can be used to generate a presence check for the field. The generated unmarshaling method returns an error if a required field is missing. Other struct tags are carried over as is. The "json", "yaml", "toml" tags can be used to rename a field when marshaling. Example: An invocation of gencodec can specify an additional 'field override' struct from which marshaling type replacements are taken. If the override struct contains a field whose name matches the original type, the generated marshaling methods will use the overridden type and convert to and from the original field type. If the override struct contains a field F of type T, which does not exist in the original type, and the original type has a method named F with no arguments and return type assignable to T, the method is called by Marshal*. If there is a matching method F but the return type or arguments are unsuitable, an error is raised. In this example, the specialString type implements json.Unmarshaler to enforce additional parsing rules. When json.Unmarshal is used with type foo, the specialString unmarshaler will be used to parse the value of SpecialField. The result of foo.Func() is added to the result on marshaling under the key `id`. If the input on unmarshal contains a key `id` this field is ignored. Field types in the override struct must be trivially convertible to the original field type. gencodec's definition of 'convertible' is less restrictive than the usual rules defined in the Go language specification. The following conversions are supported: If the fields are directly assignable, no conversion is emitted. If the fields are convertible according to Go language rules, a simple conversion is emitted. Example input code: The generated code will contain: If the fields are of map or slice type and the element (and key) types are convertible, a simple loop is emitted. Example input code: The generated code is similar to this snippet: Conversions between slices and arrays are supported. Example input code: The generated code is similar to this snippet:
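To make the override pattern concrete, here is a hedged sketch of the kind of input described above (the go:generate flags are illustrative; foo, fooMarshaling, specialString, SpecialField and Func mirror the names used in this documentation):

    package example

    import "encoding/json"

    //go:generate gencodec -type foo -field-override fooMarshaling -out gen_foo_json.go

    // specialString enforces additional parsing rules via json.Unmarshaler.
    type specialString string

    func (s *specialString) UnmarshalJSON(data []byte) error {
        // Additional validation of the raw value would go here.
        return json.Unmarshal(data, (*string)(s))
    }

    type foo struct {
        Required     string `gencodec:"required"`
        SpecialField string
    }

    // Func's result is emitted under the key "id" on marshaling, because the
    // override struct below declares a matching field that foo itself lacks.
    func (f foo) Func() string { return f.Required + "-generated" }

    // fooMarshaling is the field-override struct: SpecialField is parsed via
    // specialString, and Func maps the method above to the "id" key.
    type fooMarshaling struct {
        SpecialField specialString
        Func         string `json:"id"`
    }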
Package jump implements the "jump consistent hash" algorithm. Example Reference C++ implementation [1] Jump consistent hash works by computing when its output changes as the number of buckets increases. Let ch(key, num_buckets) be the consistent hash for the key when there are num_buckets buckets. Clearly, for any key, k, ch(k, 1) is 0, since there is only the one bucket. In order for the consistent hash function to be balanced, ch(k, 2) will have to stay at 0 for half the keys, k, while it will have to jump to 1 for the other half. In general, ch(k, n+1) has to stay the same as ch(k, n) for n/(n+1) of the keys, and jump to n for the other 1/(n+1) of the keys. Here are examples of the consistent hash values for three keys, k1, k2, and k3, as num_buckets goes up: A linear time algorithm can be defined by using the formula for the probability of ch(key, j) jumping when j increases. It essentially walks across a row of this table. Given a key and number of buckets, the algorithm considers each successive bucket, j, from 1 to num_buckets-1, and uses ch(key, j) to compute ch(key, j+1). At each bucket, j, it decides whether to keep ch(k, j+1) the same as ch(k, j), or to jump its value to j. In order to jump for the right fraction of keys, it uses a pseudorandom number generator with the key as its seed. To jump for 1/(j+1) of keys, it generates a uniform random number between 0.0 and 1.0, and jumps if the value is less than 1/(j+1). At the end of the loop, it has computed ch(k, num_buckets), which is the desired answer. In code: We can convert this to a logarithmic time algorithm by exploiting that ch(key, j+1) is usually unchanged as j increases, only jumping occasionally. The algorithm will only compute the destinations of jumps, that is, the j's for which ch(key, j+1) ≠ ch(key, j). Also notice that for these j's, ch(key, j+1) = j. To develop the algorithm, we will treat ch(key, j) as a random variable, so that we can use the notation for random variables to analyze the fractions of keys for which various propositions are true. That will lead us to a closed form expression for a pseudorandom variable whose value gives the destination of the next jump. Suppose that the algorithm is tracking the bucket numbers of the jumps for a particular key, k. And suppose that b was the destination of the last jump, that is, ch(k, b) ≠ ch(k, b+1), and ch(k, b+1) = b. Now, we want to find the next jump, the smallest j such that ch(k, j+1) ≠ ch(k, b+1), or equivalently, the largest j such that ch(k, j) = ch(k, b+1). We will make a pseudorandom variable whose value is that j. To get a probabilistic constraint on j, note that for any bucket number, i, we have j ≥ i if and only if the consistent hash hasn't changed by i, that is, if and only if ch(k, i) = ch(k, b+1). Hence, the distribution of j must satisfy P(j ≥ i) = P( ch(k, i) = ch(k, b+1) ). Fortunately, it is easy to compute that probability. Notice that since P( ch(k, 10) = ch(k, 11) ) is 10/11, and P( ch(k, 11) = ch(k, 12) ) is 11/12, then P( ch(k, 10) = ch(k, 12) ) is 10/11 * 11/12 = 10/12. In general, if n ≥ m, P( ch(k, n) = ch(k, m) ) = m / n. Thus for any i > b, P(j ≥ i) = P( ch(k, i) = ch(k, b+1) ) = (b+1) / i. Now, we generate a pseudorandom variable, r, (depending on k and j) that is uniformly distributed between 0 and 1. Since we want P(j ≥ i) = (b+1) / i, we set P(j ≥ i) iff r ≤ (b+1) / i. Solving the inequality for i yields P(j ≥ i) iff i ≤ (b+1) / r. Since i is a lower bound on j, j will equal the largest i for which P(j ≥ i), thus the largest i satisfying i ≤ (b+1) / r. Thus, by the definition of the floor function, j = floor((b+1) / r).
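The "In code" reference above corresponds to the linear-time loop; transcribed into Go it looks like the following sketch (math/rand stands in for the paper's pseudorandom generator, purely for illustration; the actual package uses the LCG shown further below):

    package jump

    import "math/rand"

    // slowJumpHash is the linear-time algorithm: for each successive bucket j,
    // jump to j for 1/(j+1) of the keys, using a PRNG seeded with the key.
    func slowJumpHash(key uint64, numBuckets int32) int32 {
        r := rand.New(rand.NewSource(int64(key)))
        var b int32 // tracks ch(key, j+1)
        for j := int32(1); j < numBuckets; j++ {
            if r.Float64() < 1.0/float64(j+1) {
                b = j
            }
        }
        return b
    }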
Using this formula, jump consistent hash finds ch(key, num_buckets) by choosing successive jump destinations until it finds a position at or past num_buckets. It then knows that the previous jump destination is the answer. To turn this into the actual code of figure 1, we need to implement random. We want it to be fast, and yet also to have well distributed successive values. We use a 64-bit linear congruential generator; the particular multiplier we use produces random numbers that are especially well distributed in higher dimensions (i.e., when successive random values are used to form tuples). We use the key as the seed. (For keys that don't fit into 64 bits, a 64-bit hash of the key should be used.) The congruential generator updates the seed on each iteration, and the code derives a double from the current seed. Tests show that this generator has good speed and distribution. It is worth noting that unlike the algorithm of Karger et al., jump consistent hash does not require the key to be hashed if it is already an integer. This is because jump consistent hash has an embedded pseudorandom number generator that essentially rehashes the key on every iteration. The hash is not especially good (i.e., linear congruential), but since it is applied repeatedly, additional hashing of the input key is not necessary. [1] http://arxiv.org/pdf/1406.2294v1.pdf
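Putting the formula and the 64-bit LCG together gives the logarithmic-time function; the following is a direct Go transcription of the reference C++ implementation from [1]:

    // jumpHash computes ch(key, numBuckets) by following successive jump
    // destinations j = floor((b+1)/r) until one lands at or past numBuckets.
    func jumpHash(key uint64, numBuckets int32) int32 {
        var b int64 = -1
        var j int64
        for j < int64(numBuckets) {
            b = j
            key = key*2862933555777941757 + 1 // 64-bit LCG step, seeded by the key
            j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
        }
        return int32(b)
    }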
Package chime provides the API client, operations, and parameter types for Amazon Chime. We recommend using the latest versions in the Amazon Chime SDK API reference, in the Amazon Chime SDK. Using the latest versions requires migrating to dedicated namespaces. For more information, refer to Migrating from the Amazon Chime namespace in the Amazon Chime SDK Developer Guide. The Amazon Chime application programming interface (API) is designed so administrators can perform key tasks, such as creating and managing Amazon Chime accounts, users, and Voice Connectors. This guide provides detailed information about the Amazon Chime API, including operations, types, inputs and outputs, and error codes. You can use an AWS SDK, the AWS Command Line Interface (AWS CLI), or the REST API to make API calls for Amazon Chime. We recommend using an AWS SDK or the AWS CLI. The page for each API action contains a See Also section that includes links to information about using the action with a language-specific AWS SDK or the AWS CLI. Using an AWS SDK You don't need to write code to calculate a signature for request authentication. The SDK clients authenticate your requests by using access keys that you provide. For more information about AWS SDKs, see the AWS Developer Center. Using the AWS CLI Use your access keys with the AWS CLI to make API calls. For information about setting up the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. For a list of available Amazon Chime commands, see the Amazon Chime commands in the AWS CLI Command Reference. Using REST APIs If you use REST to make API calls, you must authenticate your request by providing a signature. Amazon Chime supports Signature Version 4. For more information, see Signature Version 4 Signing Process in the Amazon Web Services General Reference. When making REST API calls, use the service name chime and REST endpoint https://service.chime.aws.amazon.com. Administrative permissions are controlled using AWS Identity and Access Management (IAM). For more information, see Identity and Access Management for Amazon Chime in the Amazon Chime Administration Guide.
Flatten makes flat, one-dimensional maps from arbitrarily nested ones. Map keys turn into compound names, like `a.b.1.c` (dotted style) or `a[b][1][c]` (Rails style). It takes input as either JSON strings or Go structures. It (only) knows how to traverse JSON types: maps, slices and scalars. Or Go maps directly.
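To make the transformation concrete, here is a self-contained sketch of dotted-style flattening. It only illustrates the idea; it is not the package's own API, which also handles JSON-string input and the Rails bracket style.

    package main

    import "fmt"

    // flattenInto walks maps and slices, writing each scalar under a compound
    // dotted key such as "a.b.1.c".
    func flattenInto(out map[string]interface{}, prefix string, v interface{}) {
        switch t := v.(type) {
        case map[string]interface{}:
            for k, child := range t {
                flattenInto(out, join(prefix, k), child)
            }
        case []interface{}:
            for i, child := range t {
                flattenInto(out, join(prefix, fmt.Sprint(i)), child)
            }
        default: // scalar
            out[prefix] = v
        }
    }

    func join(prefix, key string) string {
        if prefix == "" {
            return key
        }
        return prefix + "." + key
    }

    func main() {
        nested := map[string]interface{}{
            "a": map[string]interface{}{
                "b": []interface{}{map[string]interface{}{"c": 42}},
            },
        }
        flat := map[string]interface{}{}
        flattenInto(flat, "", nested)
        fmt.Println(flat) // map[a.b.0.c:42]
    }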
Package apex provides Lambda support for Go via a Node.js shim and this package for operating over stdio. Example of a Lambda function handling arbitrary JSON input.
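A minimal sketch of a handler using the stdio shim described above, assuming the apex.HandleFunc entry point from github.com/apex/go-apex (the message shape is hypothetical):

    package main

    import (
        "encoding/json"

        "github.com/apex/go-apex"
    )

    type message struct {
        Hello string `json:"hello"`
    }

    func main() {
        // The shim feeds the raw JSON event over stdio; the return value is
        // serialized back as the Lambda response.
        apex.HandleFunc(func(event json.RawMessage, ctx *apex.Context) (interface{}, error) {
            var m message
            if err := json.Unmarshal(event, &m); err != nil {
                return nil, err
            }
            m.Hello = "world"
            return m, nil
        })
    }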
Package goreadme generates a readme markdown file from go doc. The package can be used as a command line tool and as a Github action, described below: Github actions can be configured to update the README file automatically every time it is needed. Below there is an example in which, every time a new change is pushed to the main branch, the action is triggered, generates a new README file, and if there is a change - commits and pushes it to the main branch. In pull requests that affect the README content, if the `GITHUB_TOKEN` is given, the action will post a comment on the pull request with the changes that will be made to the README file. To use this with Github actions, add the following content to `.github/workflows/goreadme.yml`. See ./action.yml for all available input options. Use as a command line tool: Both the Go doc and the readme file are important. The Go doc is used by users of your library, and the README file welcomes users to your library. They share common content, which is usually duplicated from the doc to the readme or vice versa once the library is ready. The problem is that keeping documentation updated is important, and hard enough - keeping both updated is twice as hard. The formatting of the README.md is done by the go doc parser. This makes the resulting README.md a bit more limited. Currently, `goreadme` supports the formatting as explained in (godoc page) https://blog.golang.org/godoc-documenting-go-code, or (here) https://pkg.go.dev/github.com/fluhus/godoc-tricks. Meaning: * A header is a single line that is separated from a paragraph above. * A code block is recognized by indentation as Go code. * Inline code is marked with `backticks`. * URLs will just automatically be converted to links: https://github.com/posener/goreadme Additionally, the syntax was extended to include some more markdown features while keeping the Go doc readable: * Bulleted and numbered lists are possible when each bullet item is followed by an empty line. * Diff blocks are automatically detected when each line in a code block starts with a `' '`, `'-'` or `'+'`: * A repository file can be linked when providing a path that starts with `./`: ./goreadme.go. * A link can have a link text by prefixing it with parenthesised text: (goreadme page) https://github.com/posener/goreadme. * A link to a repository file can also have a link text: (goreadme main file) ./goreadme.go. * An image can be added by prefixing a link to an image with `(image/<image title>)`: (image/title of image) https://github.githubassets.com/images/icons/emoji/unicode/1f44c.png The goreadme tests use the test cases in the ./testdata directory. They generate readme files for all the packages in that directory and assert that the resulting readme matches the existing one. When modifying goreadme behavior, there is no need to manually change these readme files. It is possible to run `WRITE_READMES=1 go test ./...`, which regenerates them, and then check that the changes match what is expected (optionally using `git diff`).
Package goconfig uses a struct as input and populates the fields of this struct with parameters from the command line, environment variables and a configuration file.
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling request and responses.
The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.

Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are:

1. "info" or "i" enables all info logging messages

2. "debug" or "d" enables all debug and info logging messages

3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages

4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted.

The default destination for logging is stderr. If you want to output logs to a file, you can set the destination via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are:

1. "file" or "f" writes all logging output to a file

2. "combine" or "c" writes all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; its value should be the path to a specific file. If this environment variable is not present, the default location is the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there are network issues. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). You can set this retry policy for a single request or for all requests made by a client, as sketched below.
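A minimal sketch of attaching that default retry policy to a single request. The DefaultRetryPolicy helper and the specific operation shown are recalled from the SDK rather than stated in this document, so treat the names as assumptions; the client-wide variant is configured on the client itself:

    package retryexample

    import (
        "context"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    // listWithRetry issues one call that is retried according to the
    // SDK's default retry policy (up to 7 attempts, ~1.5 minutes).
    func listWithRetry(client identity.IdentityClient, compartmentID string) error {
        policy := common.DefaultRetryPolicy() // assumed helper returning the default policy
        req := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String(compartmentID),
            RequestMetadata: common.RequestMetadata{
                RetryPolicy: &policy, // this single request is retried per the policy
            },
        }
        _, err := client.ListAvailabilityDomains(context.Background(), req)
        return err
    }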
Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail.

If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example:

This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will:

1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made

2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer

3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm)

If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.

If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, you can set this up in the following ways:

1. Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment

2. Modifying the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client):

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-typed response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value.

When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm.
A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable:

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk

Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt

To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom

For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
Package ql implements a pure Go embedded SQL database engine.

Builder results available at

QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2020-12-10: The sql/database driver now supports the url parameter removeemptywal=N, which has the same semantics as passing RemoveEmptyWAL = N != 0 to the OpenFile options.

2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add the IsDuplicateUniqueIndexError function.

2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases, set the FileFormat field in *Options passed to OpenFile to the value 2, or use the driver named "ql2" instead of "ql".

- Both the old and the new driver will properly open and use, read and write, an existing database in the old (V1) or new (V2) file format.

- The V1 format has a record size limit of ~64 kB. The V2 format record size limit is math.MaxInt32.

- The V1 format uncommitted transaction size is limited by memory resources. A V2 format uncommitted transaction is limited by free disk space.

- A direct consequence of the previous point is that small transactions perform better using the V1 format and big transactions perform better using the V2 format.

- The V2 format uses substantially less memory.

2018-08-02: Release v1.2.0 adds initial support for Go modules.

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for the latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before, plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including the output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful together with the UNIQUE clause to check for semantically duplicate rows before they get added to the table, or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of the table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now provides that information in the new ColumnInfo fields NotNull, Constraint and Default.

2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.

2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)

2015-02-16: The IN predicate now accepts a SELECT statement. See the updated "Predicates" section.

2015-01-17: The logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea not to use any name appearing in, for example, [7] in your queries, as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".

2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19; the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is, strictly speaking, no longer required. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.

2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.

2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)

2014-06-05: The RecordSet interface was extended by the new methods FirstRow and Rows.

2014-06-02: Indices on id() are now used by SELECT statements.

2014-05-07: Introduction of Marshal, Schema, Unmarshal.

2014-04-15: Added an optional IF NOT EXISTS clause to CREATE INDEX and an optional IF EXISTS clause to DROP INDEX.

2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.

2014-04-11: Introduction of LIMIT, OFFSET.

2014-04-10: Introduction of query rewriting.

2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or of an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option):

The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified.

QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements.

The following terms are used to denote specific character classes. The underscore character _ (U+005F) is considered a letter.

Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters.

Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.

Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token.

The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.)

Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example

A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL key words or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes. «TRANSACTION» «duration» «lovely stories»

No identifiers are predeclared, however note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for the names of metadata virtual tables. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example

The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes “. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. 
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. 
A qualified identifier is an identifier qualified with a table/record set name prefix. For example

Primary expressions are the operands for unary and binary expressions. For example

A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply:

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.

- A constant index must be non-negative and representable by a value of type int.

- A constant index must be in range if the string a is a literal.

- If x is out of range at run time, a run-time error occurs.

- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.

Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error.

Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone.

Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL.

Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types:

Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators".

Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false.

Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar.

Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only.

    +    sum                    integers, rationals, floats, complex values, strings
    -    difference             integers, rationals, floats, complex values, times
    *    product                integers, rationals, floats, complex values
    /    quotient               integers, rationals, floats, complex values
    %    remainder              integers
    &    bitwise AND            integers
    |    bitwise OR             integers
    ^    bitwise XOR            integers
    &^   bit clear (AND NOT)    integers
    <<   left shift             integer << unsigned integer
    >>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration.

For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation.

The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific.

Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL.

For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program.

Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows:

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.

- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).

- Integer values are comparable and ordered, in the usual way. Note that durations are integers.

- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.

- Rational values are comparable and ordered, in the usual way.

- String and Blob values are comparable and ordered, lexically byte-wise.

- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.

- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.

- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.

- x's type and T are both integer or floating point types.

- x's type and T are both complex types.

- x is an integer, except bigint or duration, and T is a string type.
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example

When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must eventually be balanced by exactly one COMMIT or ROLLBACK statement. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example

A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction

The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example

Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example

Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression, or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration.
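As a rough illustration of the transaction and index statements above (plus a CREATE TABLE, which is documented next), here is a minimal sketch using ql's Go API. The import path and the OpenMem, NewRWCtx and (*DB).Run calls are assumptions based on the package as the author recalls it, not a verbatim reference:

    package main

    import (
        "fmt"

        "modernc.org/ql" // import path assumed; the package was formerly github.com/cznic/ql
    )

    func main() {
        db, err := ql.OpenMem() // in-memory database; OpenFile works similarly
        if err != nil {
            panic(err)
        }

        // Mutations must happen inside a transaction, so the statement list
        // is wrapped in BEGIN TRANSACTION ... COMMIT.
        ctx := ql.NewRWCtx()
        _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE dept (DepartmentID int, DepartmentName string);
                CREATE INDEX deptID ON dept (DepartmentID);
            COMMIT;
        `)
        if err != nil {
            panic(err)
        }
        fmt.Println("schema created")
    }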
Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example

The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists.

The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs.

The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name:

When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.

4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example

If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing values to be inserted that are calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS turns the insert statement into a no-op in that case. For example

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis.
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example

The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example

If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].
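A rough sketch of such a LEFT OUTER JOIN issued through ql's Go API. The table and column names are illustrative, and the Run and Rows calls are assumed to behave as in the earlier sketch; check the join syntax against the full grammar:

    package example

    import (
        "fmt"

        "modernc.org/ql" // import path assumed
    )

    // queryEmployees lists every employee with its department name; employees
    // without a matching department still appear, with the department fields
    // set to NULL, per the LEFT OUTER JOIN semantics described above.
    func queryEmployees(db *ql.DB) error {
        rs, _, err := db.Run(nil, `
            SELECT emp.Name, dept.DepartmentName
            FROM emp LEFT OUTER JOIN dept ON emp.DepartmentID == dept.DepartmentID
            ORDER BY emp.Name;
        `)
        if err != nil {
            return err
        }
        rows, err := rs[0].Rows(1000, 0) // Rows(limit, offset): fetch up to 1000 rows
        if err != nil {
            return err
        }
        for _, row := range rows {
            fmt.Println(row)
        }
        return nil
    }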
Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159).

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example

Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets.

The optional OFFSET clause allows ignoring the first N records. For example

The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example

The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example

Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.

A select statement is evaluated in the following order:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example

Update statements change values of fields in rows of a table.
For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value, or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example

Date returns the time corresponding to yyyy-mm-dd hh:mm:ss + nsec nanoseconds in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). The same call therefore returns one string on a machine in the CET time zone, but may return a different string on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique, automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). If id() without arguments is called for a row which is not a table record, then the result value is NULL. If id() has one argument, it must be the name of a table in a cross join.

The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type.

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. The column values must be of an ordered type.

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
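Since formatFloat and formatInt are described as wrapping Go's `strconv` package, the following Go-only sketch shows the kind of formatting involved. The specific format byte, precision, bit size and base shown here are illustrative assumptions, not values taken from the QL documentation itself.

	package main

	import (
		"fmt"
		"strconv"
	)

	func main() {
		// Format a float with a format byte, precision and bit size.
		fmt.Println(strconv.FormatFloat(43.2, 'g', -1, 64)) // "43.2"

		// Format signed and unsigned integers in base 10.
		fmt.Println(strconv.FormatInt(-42, 10))         // "-42"
		fmt.Println(strconv.FormatUint(uint64(42), 10)) // "42"
	}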
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.

The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
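parseTime's behaviour as described above matches Go's time.Parse, so a plain-Go sketch can illustrate why numeric zone offsets are preferable; the layouts and values here are invented for the example.

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	func main() {
		// An unknown zone abbreviation is kept with a zero offset in a
		// fabricated location, so the instant may be off by the real offset.
		t1, err := time.Parse("2006-01-02 15:04 MST", "2021-06-01 10:30 XYZ")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(t1)

		// A numeric zone offset in the layout pins down the exact instant.
		t2, err := time.Parse("2006-01-02 15:04 -0700", "2021-06-01 10:30 +0930")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(t2)
	}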
The built-in function yearDay returns the day of the year specified by t, in the range [1, 365] for non-leap years, and [1, 366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL.

For the numeric types, fixed sizes are guaranteed.

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use: an indexed field or id() compared, via relOp, with a constant expression or a parameter. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f. A cross-join WHERE expression constraining both t.f and u.f doesn't comply with the above simple detected cases; however, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.

Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
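As a rough sketch of the recognized WHERE patterns, the following Go program creates an index and issues a query of the "indexed field relOp parameter" shape. The ql import path, the OpenMem/NewRWCtx/Run calls and the CREATE INDEX syntax are assumptions based on that package's documentation; the names are illustrative.

	package main

	import (
		"log"

		"modernc.org/ql" // import path assumed
	)

	func main() {
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}
		ctx := ql.NewRWCtx()

		// Create a table with an index on field f (CREATE INDEX syntax assumed).
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (f int64);
				CREATE INDEX tf ON t (f);
			COMMIT;
		`); err != nil {
			log.Fatal(err)
		}

		// "f < $1" has the recognized shape "indexed field relOp parameter",
		// so the WHERE clause can use the index on f.
		if _, _, err = db.Run(ctx, `SELECT f FROM t WHERE f < $1;`, int64(42)); err != nil {
			log.Fatal(err)
		}
	}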
Package tview implements rich widgets for terminal based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package implements a number of widgets and also provides Application, which is used to poll the event queue and draw widgets on screen.

Consider a very basic example showing a box with the title "Hello, world!". First, we create a box primitive with a border and a title. Then we create an application, set the box as its root primitive, and run the event loop. The application exits when the application's Stop() function is called or when Ctrl-C is pressed. If we have a primitive which consumes key presses, we call the application's SetFocus() function to redirect all key presses to that primitive. Most primitives then offer ways to install handlers that allow you to react to any actions performed on them.

You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used.

Throughout this package, colors are specified using the tcell.Color type. Functions such as tcell.GetColor(), tcell.NewHexColor(), and tcell.NewRGBColor() can be used to create colors from W3C color names or RGB values. Almost all strings which are displayed can contain color tags. Color tags are W3C color names or six hexadecimal digits following a hash tag, wrapped in square brackets. A color tag changes the color of the characters following that color tag. This applies to almost everything from box titles, list text, form item labels, to table cells. In a TextView, this functionality has to be switched on explicitly. See the TextView documentation for more information.

Color tags may contain not just the foreground (text) color but also the background color and additional flags. In fact, the full definition of a color tag is a foreground color, a background color, and flags, separated by colons and wrapped in square brackets. Each of the three fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered color tags.) Colors that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can also specify flags, though some flags may not be supported by your terminal.

In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags, i.e. any character that may be used in color or region tags will be recognized. You can use the Escape() function to insert brackets automatically where needed.

When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style. This package supports unicode characters including wide characters.

Many functions in this package are not thread-safe. For many applications, this may not be an issue: If your code makes changes in response to key events, it will execute in the main goroutine and thus will not cause any race conditions. If you access your primitives from other goroutines, however, you will need to synchronize execution.
The easiest way to do this is to call Application.QueueUpdate() or Application.QueueUpdateDraw() (see the function documentation for details). One exception to this is the io.Writer interface implemented by TextView: you can safely write to a TextView from any goroutine. See the TextView documentation for details. You can also call Application.Draw() from any goroutine without having to wrap it in QueueUpdate(). And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use QueueUpdate(), as that may lead to deadlocks.

All of the package's widgets contain the Box type, so all of Box's functions are available for all widgets, too. All widgets also implement the Primitive interface.

The tview package is based on https://github.com/derailed/tcell. It uses types and constants from that package (e.g. colors and keyboard values). This package does not process mouse input (yet).
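As a rough illustration of the synchronization advice above, the following sketch starts a goroutine that updates a TextView via Application.QueueUpdateDraw. The widget content is invented for the example, and the import path shown is the upstream one; the fork described above may use a different path.

	package main

	import (
		"time"

		"github.com/rivo/tview" // upstream import path; adjust for the fork described above
	)

	func main() {
		app := tview.NewApplication()
		tv := tview.NewTextView().SetText("waiting...")

		// Changes made from another goroutine go through QueueUpdateDraw,
		// which serializes them with the main event loop and triggers a redraw.
		go func() {
			time.Sleep(time.Second)
			app.QueueUpdateDraw(func() {
				tv.SetText("updated from a goroutine")
			})
		}()

		if err := app.SetRoot(tv, true).Run(); err != nil {
			panic(err)
		}
	}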
Package polyline implements a Google Maps Encoding Polyline encoder and decoder. See https://developers.google.com/maps/documentation/utilities/polylinealgorithm. The default codec encodes and decodes two-dimensional coordinates scaled by 1e5. For other dimensionalities and scales create a custom Codec. The package operates on byte slices. Encoding functions take an existing byte slice as input (which can be nil) and return a new byte slice with the encoded value appended to it, similarly to how Go's append function works. To increase performance, you can pre-allocate byte slices, for example by passing make([]byte, 0, 128) as the input byte slice. Similarly, decoding functions take a byte slice as input and return the remaining unconsumed bytes as output.
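A brief sketch of the append-style API described above. The Codec fields and the EncodeCoords/DecodeCoords method names are assumptions worth verifying against the package documentation, and the coordinates are invented for the example.

	package main

	import (
		"fmt"

		polyline "github.com/twpayne/go-polyline" // import path assumed
	)

	func main() {
		coords := [][]float64{
			{38.5, -120.2},
			{40.7, -120.95},
		}

		// A codec equivalent to the default: two dimensions, scaled by 1e5.
		codec := polyline.Codec{Dim: 2, Scale: 1e5}

		// Encoding appends to the supplied buffer, which may be nil or
		// pre-allocated, e.g. make([]byte, 0, 128).
		buf := codec.EncodeCoords(nil, coords)
		fmt.Printf("%s\n", buf)

		// Decoding returns the coordinates and any unconsumed bytes.
		decoded, rest, err := codec.DecodeCoords(buf)
		if err != nil {
			panic(err)
		}
		fmt.Println(decoded, len(rest))
	}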
Package getlang provides fast natural language detection for various languages. getlang compares input text to a characteristic profile of each supported language and returns the language that best matches the input text.
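A small usage sketch; the FromString entry point, the Info methods shown, and the import path are assumptions based on the package's typical API, and the sentence is invented for the example.

	package main

	import (
		"fmt"

		"github.com/rylans/getlang" // import path assumed
	)

	func main() {
		info := getlang.FromString("Das ist ein kurzer deutscher Satz.")
		fmt.Println(info.LanguageCode(), info.LanguageName(), info.Confidence())
	}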
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call.

When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets.

The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. In short, the default mode uses an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`, while the `.rodata` variant provides the same functionality with a byte slice that can not be written to without generating a runtime error.

The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression.

The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`.
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag keeps the full input path in the map key, while running with the `-prefix` flag strips the given path prefix from it.

With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line at the beginning of the output file and must follow the build tags syntax specified by the go tool.
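Putting the options above together, here is a rough sketch of driving the library programmatically. Translate() and the Config fields Debug, NoMemCopy, NoCompress, Prefix and Tags are mentioned in the text above; the NewConfig constructor, the Package, Output and Input fields, the InputConfig type, the import path, and all paths and values are assumptions made for illustration.

	package main

	import (
		"log"

		bindata "github.com/jteeuwen/go-bindata" // import path assumed
	)

	func main() {
		cfg := bindata.NewConfig()       // assumed constructor with sensible defaults
		cfg.Package = "assets"           // assumed field: package of the generated file
		cfg.Output = "assets/bindata.go" // assumed field: where the generated file goes
		cfg.Prefix = "static/"           // stripped from map keys and function names
		cfg.Tags = "embed"               // appended to the // +build line
		cfg.Debug = false                // true would generate stubs that read from disk
		cfg.NoMemCopy = false            // true would use the read-only .rodata hack
		cfg.NoCompress = false           // true would skip gzip compression
		cfg.Input = []bindata.InputConfig{{Path: "static", Recursive: true}} // assumed input type

		if err := bindata.Translate(cfg); err != nil {
			log.Fatal(err)
		}
	}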
The backend package defines the very soul of Lime. Some high-level concepts follow.

Lime is designed with the goal of having a clear frontend and backend separation, to allow and hopefully simplify the creation of multiple frontend versions. The two most active frontends are, at the time of writing, one for terminals based on termbox-go and a GUI application based on Qt's QML scripting language. There's also a proof-of-concept HTML frontend.

The Editor singleton represents the most fundamental interface that frontends use to access the backend. It keeps a list of editor windows, handles input, detects file changes and communicates back to the frontend as needed. At any time there can be multiple windows of the editor open. Each Window can have a different layout, settings and status.

The View class defines a "view" into a specific backing buffer. Multiple views can share the same backing buffer, for instance when viewing the same buffer in a split view, or when viewing the buffer with one syntax definition in one view and another syntax definition in the other. It is very closely related to the view defined in the Model-view-controller paradigm, and contains settings pertaining to exactly how a buffer is shown to the user.

The command interface defines actions to be executed either for the whole application, a specific window or a specific view.

Key bindings define a sequence of key-presses, a Command and the command's arguments to be executed upon that sequence having been pressed. Key bindings can optionally have multiple contexts associated with them, which allows the exact same key sequence to have a different meaning depending on context. See http://godoc.org/github.com/limetext/backend#QueryContextCallback for details.

Many of the components have their own key-value Settings object associated with them, but settings are also nested. In other words, if the settings key does not exist in the current object's settings, its parent's settings object is queried next, which in turn will query its parent if its own settings object did not contain the key either. A conceptual sketch of this cascade is shown below.
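The cascading settings lookup described in the last paragraph can be pictured with a small, self-contained sketch. This is a conceptual illustration only, not Lime's actual Settings API; all type and key names are invented.

	package main

	import "fmt"

	// settings is a toy key-value store with an optional parent, mirroring the
	// nesting described above: view settings fall back to window settings,
	// which fall back to editor settings, and so on.
	type settings struct {
		parent *settings
		values map[string]interface{}
	}

	func (s *settings) get(key string) (interface{}, bool) {
		if v, ok := s.values[key]; ok {
			return v, true
		}
		if s.parent != nil {
			return s.parent.get(key)
		}
		return nil, false
	}

	func main() {
		editor := &settings{values: map[string]interface{}{"tab_size": 4}}
		window := &settings{parent: editor, values: map[string]interface{}{}}
		view := &settings{parent: window, values: map[string]interface{}{"font_size": 12}}

		v, _ := view.get("tab_size") // not set on the view or window, found on the editor
		fmt.Println(v)               // 4
	}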
Tool for Go to sort imports (goimports style) into 3-4 groups: std, general, local (which is optional) and project dependencies. It will help you to keep your code cleaner. A hypothetical before/after illustration is shown below. If you need to set package names explicitly (in the import declaration), you can use the additional option `-set-alias`.
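A hypothetical illustration of the grouping; all import paths are invented for the example, and the optional "local" group is omitted.

	// Before (one unsorted block, as it might be written by hand):
	//
	//	import (
	//		"github.com/example/project/internal/config"
	//		"fmt"
	//		"github.com/pkg/errors"
	//		"os"
	//	)
	//
	// After: std first, then general dependencies, then project packages,
	// each group separated by a blank line.
	package main

	import (
		"fmt"
		"os"

		_ "github.com/pkg/errors"

		_ "github.com/example/project/internal/config"
	)

	func main() {
		fmt.Println(os.Args)
	}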
Package protocompile provides the entry point for a high performance native Go protobuf compiler. "Compile" in this case just means parsing and validating source and generating fully-linked descriptors in the end. Unlike the protoc command-line tool, this package does not try to use the descriptors to perform code generation.

The various sub-packages represent the various compile phases and contain models for the intermediate results of those phases. This package provides an easy-to-use interface that performs all the relevant phases, based on the inputs given. If an input is provided as source, all phases apply. If an input is provided as a descriptor proto, only phases 3 to 5 apply. Nothing is necessary if an input is provided as an already-linked descriptor (which is usually only the case for select system dependencies). This package is also capable of taking advantage of multiple CPU cores, so a compilation involving thousands of files can be done very quickly by compiling things in parallel.

A Resolver is how the compiler locates artifacts that are inputs to the compilation. For example, it can load protobuf source code that must be processed. A Resolver could also supply some already-compiled dependencies as fully-linked descriptors, alleviating the need to re-compile them. In response to a query for an input, a Resolver can provide protobuf source, a descriptor proto, or an already-linked descriptor. Compilation will use the Resolver to load the files that are to be compiled and also to load all dependencies (i.e. other files imported by those being compiled).

A Compiler accepts a list of file names and produces the list of descriptors. A Compiler has several fields that control how it works, but only the Resolver field is required. A minimal Compiler, one that resolves files by loading them from the file system based on the current working directory, takes only a couple of lines; a sketch is shown below. This minimal Compiler will use default parallelism, equal to the number of CPU cores detected; it will not generate source code info in the resulting descriptors; and it will fail fast at the first sign of any error. All of these aspects can be customized by setting other fields.
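A best-effort reconstruction of that minimal snippet follows. The Compiler, SourceResolver and Compile names follow the package's documented API, but treat the exact types as an assumption and check the package documentation; the file name is a placeholder.

	package main

	import (
		"context"
		"log"

		"github.com/bufbuild/protocompile"
	)

	func main() {
		// SourceResolver loads .proto files from the file system, relative to
		// the current working directory.
		compiler := protocompile.Compiler{
			Resolver: &protocompile.SourceResolver{},
		}

		// "example.proto" is a placeholder file name.
		files, err := compiler.Compile(context.Background(), "example.proto")
		if err != nil {
			log.Fatal(err) // by default the compiler fails fast on the first error
		}
		log.Printf("compiled %d file(s)", len(files))
	}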