Package date provides functionality for working with dates. It implements a lightweight Date type that is storage-efficient and convenient for calendrical calculations, date parsing and formatting (including years outside the [0,9999] interval).

Subpackages provide:

* `clock.Clock`, which expresses a wall-clock style hours-minutes-seconds with millisecond precision.
* `period.Period`, which expresses a period corresponding to the ISO-8601 form (e.g. "PT30S").
* `timespan.DateRange`, which expresses a period between two dates.
* `timespan.TimeSpan`, which expresses a duration of time between two instants.
* `view.VDate`, which wraps `Date` for use in templates etc.

This package follows the design of package time (http://golang.org/pkg/time/) in the standard library very closely: many of the Date methods are implemented using the corresponding methods of the time.Time type, and much of the documentation is copied directly from that package.

https://golang.org/src/time/time.go
https://en.wikipedia.org/wiki/Gregorian_calendar
https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar
https://en.wikipedia.org/wiki/Astronomical_year_numbering
https://en.wikipedia.org/wiki/ISO_8601
https://tools.ietf.org/html/rfc822
https://tools.ietf.org/html/rfc850
https://tools.ietf.org/html/rfc1123
https://tools.ietf.org/html/rfc3339
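A minimal sketch of typical usage follows; the import path (github.com/rickb777/date) and the exact method set are assumptions here, so consult the package index before relying on them:

    package main

    import (
        "fmt"
        "time"

        "github.com/rickb777/date" // assumed import path
    )

    func main() {
        d := date.New(1969, time.July, 20) // construct a Date from year, month, day
        fmt.Println(d.Weekday())           // calendrical methods delegate to time.Time
        fmt.Println(date.Today().Sub(d))   // difference between two dates, in days
    }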
Package repr attempts to represent Go values in a form that can be copy-and-pasted into source code directly. Some values (such as pointers to basic types) cannot be represented directly in Go. These values will be output as `&<value>`, e.g. `&23`.
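A short sketch (assuming the github.com/alecthomas/repr import path):

    package main

    import "github.com/alecthomas/repr"

    type User struct {
        Name string
        Age  *int
    }

    func main() {
        age := 23
        // Prints something like: main.User{Name: "alice", Age: &23}
        repr.Println(User{Name: "alice", Age: &age})
    }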
Package fernet takes a user-provided message (an arbitrary sequence of bytes), a key (256 bits), and the current time, and produces a token, which contains the message in a form that can't be read or altered without the key. For more information and background, see the Fernet spec at https://github.com/fernet/spec. Subdirectories in this package provide command-line tools for working with Fernet keys and tokens.
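A hedged sketch of round-tripping a message, assuming the fernet-go implementation at github.com/fernet/fernet-go (treat the exact names as assumptions):

    package main

    import (
        "fmt"
        "time"

        "github.com/fernet/fernet-go"
    )

    func main() {
        var k fernet.Key
        if err := k.Generate(); err != nil { // in practice, load a shared 256-bit key
            panic(err)
        }
        tok, err := fernet.EncryptAndSign([]byte("hello"), &k)
        if err != nil {
            panic(err)
        }
        // Returns nil if the token was altered or is older than the TTL.
        msg := fernet.VerifyAndDecrypt(tok, 60*time.Second, []*fernet.Key{&k})
        fmt.Println(string(msg))
    }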
Package gofight offers simple HTTP handler testing for Go frameworks. Details about the gofight project can be found on its GitHub page.

Installation:

Set Header: you can add a custom header via the SetHeader func.

Set Cookie: you can add a custom cookie via the SetCookie func.

Set query string: use SetQuery to generate query string data.

POST FORM Data: use SetForm to generate form data.

POST JSON Data: use SetJSON to generate JSON data.

POST RAW Data: use SetBody to generate raw data.

For more details, see the documentation and examples.
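A test sketch using a few of these helpers (assuming the github.com/appleboy/gofight/v2 import path; testify is used for assertions):

    package main

    import (
        "net/http"
        "testing"

        "github.com/appleboy/gofight/v2"
        "github.com/stretchr/testify/assert"
    )

    func pingHandler() http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("pong"))
        })
    }

    func TestPing(t *testing.T) {
        r := gofight.New()
        r.GET("/ping").
            SetHeader(gofight.H{"X-Trace": "1"}). // custom header
            Run(pingHandler(), func(res gofight.HTTPResponse, req gofight.HTTPRequest) {
                assert.Equal(t, http.StatusOK, res.Code)
                assert.Equal(t, "pong", res.Body.String())
            })
    }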
Package nlp provides implementations of selected machine learning algorithms for natural language processing of text corpora. The primary focus is the statistical semantics of plain-text documents, supporting semantic analysis and retrieval of semantically similar documents. The package makes use of the Gonum (http://www.gonum.org/) library for linear algebra and scientific computing, with some inspiration taken from Python's scikit-learn (http://scikit-learn.org/stable/) and Gensim (https://radimrehurek.com/gensim/). The primary intended use case is to support document input as text strings encoded as a matrix of numerical feature vectors called a `term document matrix`. Each column in the matrix corresponds to a document in the corpus and each row corresponds to a unique term occurring in the corpus. The individual elements within the matrix contain the frequency with which each term occurs within each document (referred to as `term frequency`). Whilst textual data from document corpora are the primary intended use case, the algorithms can be used with other types of data from other sources once encoded (vectorised) into a suitable matrix, e.g. image data, sound data, users/products, etc. These matrices can be processed and manipulated through the application of additional transformations for weighting features, identifying relationships or optimising the data for analysis, information retrieval and/or predictions. Typically the algorithms in this package implement one of three primary interfaces: One of the implementations of Vectoriser is Pipeline, which can be used to wire together pipelines composed of a Vectoriser and one or more Transformers arranged in serial so that the output from each stage forms the input of the next. This can be used to construct a classic LSI (Latent Semantic Indexing) pipeline (vectoriser -> TF.IDF weighting -> Truncated SVD): Whilst they take different inputs, both Vectorisers and Transformers have 3 primary methods:
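A rough sketch of that LSI pipeline; the constructor names and signatures (NewCountVectoriser, NewTfidfTransformer, NewTruncatedSVD, NewPipeline) follow the github.com/james-bowman/nlp package and should be treated as assumptions that may differ between versions:

    package main

    import (
        "log"

        "github.com/james-bowman/nlp"
    )

    func main() {
        corpus := []string{
            "The quick brown fox jumped over the lazy dog",
            "the lazy dog sat in the sun",
        }

        // vectoriser -> TF.IDF weighting -> Truncated SVD
        pipeline := nlp.NewPipeline(
            nlp.NewCountVectoriser(),
            nlp.NewTfidfTransformer(),
            nlp.NewTruncatedSVD(2), // project documents into a 2-dimensional semantic space
        )

        lsi, err := pipeline.FitTransform(corpus...)
        if err != nil {
            log.Fatal(err)
        }
        // lsi holds one column per document in the reduced semantic space.
        _ = lsi
    }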
Package fig loads configuration files and/or environment variables into Go structs with extra juice for validating fields and setting defaults. Config files may be defined in yaml, json or toml format. When you call `Load()`, fig takes the following steps: Define your configuration file in the root of your project: Define your struct and load it: Pass options as additional parameters to `Load()` to configure fig's behaviour. Do not look for any configuration file with `IgnoreFile()`. If IgnoreFile is given then any other configuration file related options like `File` and `Dirs` are simply ignored.

File & Dirs: by default fig searches for a file named `config.yaml` in the directory it is run from. Change the file and directories fig searches in with `File()` and `Dirs()`. Fig searches for the file in the dirs sequentially and uses the first matching file. The decoder (yaml/json/toml) used is picked based on the file's extension.

The struct tag key fig looks for to find a field's alt name can be changed using `Tag()`. By default fig uses the tag key `fig`.

Fig can be configured to additionally set fields using the environment. This behaviour can be enabled using the option `UseEnv(prefix)`. If loading from a file is also enabled then the struct is first loaded from the config file, and any values found in the environment then overwrite existing values in the struct. Prefix is a string that will be prepended to the keys that are searched in the environment. Although discouraged, prefix may be left empty. Fig searches for keys in the form PREFIX_FIELD_PATH, or, if prefix is left empty, FIELD_PATH. A field's path is formed by prepending its name with the names of all the surrounding structs up to the root struct, upper-cased and separated by an underscore. If a field has an alt name defined in its struct tag then that name is preferred over its struct name. With the struct above and `UseEnv("myapp")` fig would search for the following environment variables:

Fields contained in struct slices whose elements already exist can also be set via the environment, in the form PARENT_IDX_FIELD, where IDX is the index of the element in the slice. With the config above, individual servers may be configured with the following environment variable: Note: the Server slice must already have members inside it (i.e. from loading of the configuration file) for the fields it contains to be altered via the environment. Fig will not instantiate and insert elements into the slice. Maps and map values cannot be populated from the environment.

Change the layout fig uses to parse times using `TimeLayout()`. By default fig parses time using the `RFC3339` layout (`2006-01-02T15:04:05Z07:00`).

By default fig ignores any fields in the config file that are not present in the struct. This behaviour can be changed using `UseStrict()` to achieve strict parsing. When strict parsing is enabled, extra fields in the config file will cause an error.

A validate key with a required value in the field's struct tag makes fig check whether the field has been set after it has been loaded. Required fields that are not set are returned as an error. Fig uses the following properties to check if a field is set: See the example below to help understand:

A default key in the field tag makes fig fill the field with the value specified when the field is not otherwise set. Fig attempts to parse the value based on the field's type. If parsing fails then an error is returned.
A default value can be set for the following types: Successive elements of slice defaults should be separated by a comma. The entire slice can optionally be enclosed in square brackets:

Boolean values: fig cannot distinguish between false and an unset value for boolean types. As a result, default values for booleans are not currently supported.

Maps: maps are not supported because providing a map in string form would be complex and error-prone. Users are encouraged to use structs instead for more reliable and structured data handling.

Map values: values retrieved from a map through reflection are not addressable. Therefore, setting default values for map values is not currently supported.

The required validation and the default field tags are mutually exclusive, as they are contradictory. This is not allowed:

A wrapped error `ErrFileNotFound` is returned when fig is not able to find a config file to load. This can be useful, for instance, to fall back to a different configuration loading mechanism.
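Putting the pieces together, a minimal load might look like this (a sketch assuming the github.com/kkyr/fig import path):

    package main

    import (
        "errors"
        "fmt"

        "github.com/kkyr/fig"
    )

    type Config struct {
        Host string `fig:"host" default:"localhost"`
        Port int    `fig:"port" validate:"required"`
    }

    func main() {
        var cfg Config
        err := fig.Load(&cfg,
            fig.File("config.yaml"),     // file to search for
            fig.Dirs(".", "/etc/myapp"), // directories searched in order
            fig.UseEnv("MYAPP"),         // also read MYAPP_HOST, MYAPP_PORT
        )
        if errors.Is(err, fig.ErrFileNotFound) {
            fmt.Println("no config file; falling back to another mechanism")
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }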
Package formam decodes HTTP form and query parameters.
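A small sketch of decoding query parameters into a struct (assuming the github.com/monoculum/formam import path and its package-level Decode):

    package main

    import (
        "fmt"
        "net/url"

        "github.com/monoculum/formam"
    )

    type Address struct {
        City string `formam:"city"`
        Zip  string `formam:"zip"`
    }

    func main() {
        vals := url.Values{"city": {"Berlin"}, "zip": {"10115"}}
        var a Address
        if err := formam.Decode(vals, &a); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", a)
    }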
Package vestigo implements a performant, stand-alone, HTTP-compliant URL router for Go web applications. Vestigo utilizes a simple radix trie for URL route indexing and search, and puts any URL parameters found in a request into the request's Form, much like PAT. Vestigo boasts standards compliance regarding the proper behavior when methods are not allowed on a given resource as well as when a resource isn't found. Vestigo also includes built-in CORS support, configurable both globally and per resource.
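A routing sketch (assuming the github.com/husobee/vestigo import path):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/husobee/vestigo"
    )

    func main() {
        router := vestigo.NewRouter()
        router.Get("/users/:id", func(w http.ResponseWriter, r *http.Request) {
            // URL parameters are stored in the request's Form, much like PAT.
            fmt.Fprintf(w, "user %s", vestigo.Param(r, "id"))
        })
        http.ListenAndServe(":8080", router)
    }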
Package gomarkdoc formats documentation for one or more packages as markdown for usage outside of the main https://pkg.go.dev site. It supports custom templates for tweaking representation of documentation at fine-grained levels, exporting both exported and unexported symbols, and custom formatters for different backends. If you want to use this package as a command-line tool, you can install the command by running the following on Go 1.16+: For older versions of Go, you can install using the following method instead: The command line tool supports configuration for all of the features of the importable package: The gomarkdoc command processes each of the provided packages, generating documentation for the package in markdown format and writing it to the console. For example, if you have a package in your current directory and want to send it to a documentation markdown file, you might do something like this: The gomarkdoc tool supports generating documentation for both local packages and remote ones. To specify a local package, start the name of the package with a period (.) or specify an absolute path on the filesystem. All other package signifiers are assumed to be remote packages. You may specify both local and remote packages in the same command invocation as separate arguments. If you have a project with many packages but you want to skip documentation generation for some, you can use the --exclude-dirs option. This will remove any matching directories from the list of directories to process. Excluded directories are specified using the same pathing syntax as the packages to process. Multiple expressions may be comma-separated or specified by using the --exclude-dirs flag multiple times. For example, in this repository we generate documentation for the entire project while excluding our test packages by running: By default, the documentation generated by the gomarkdoc command is sent to standard output, where it can be redirected to a file. This can be useful if you want to perform additional modifications to the documentation or send it somewhere other than a file. However, keep in mind that there are some inconsistencies in how various shells/platforms handle redirected command output (for example, Powershell encodes in UTF-16, not UTF-8). As a result, the --output option described below is recommended for most use cases. If you want to redirect output for each processed package to a file, you can provide the --output/-o option, which accepts a template specifying how to generate the path of the output file. A common usage of this option is when generating README documentation for a package with subpackages (which are supported via the ... signifier as in other parts of the Go toolchain). In addition, this option provides consistent behavior across platforms and shells: You can see all of the data available to the output template in the PackageSpec struct in the github.com/princjef/gomarkdoc/cmd/gomarkdoc package. The documentation information that is output is formatted using a series of text templates for the various components of the overall documentation which get generated. Higher level templates contain lower level templates, but any template may be replaced with an override template using the --template/-t option. The full list of templates that may be overridden is:

file: generates documentation for a file containing one or more packages, depending on how the tool is configured. This is the root template for documentation generation.
package: generates documentation for an entire package.

type: generates documentation for a single type declaration, as well as any related functions/methods.

func: generates documentation for a single function or method. It may be referenced from within a type, or directly in the package, depending on nesting.

value: generates documentation for a single variable or constant declaration block within a package.

index: generates an index of symbols within a package, similar to what is seen for godoc.org. The index links to types, funcs, variables, and constants generated by other templates, so it may need to be overridden as well if any of those templates are changed in a material way.

example: generates documentation for a single example for a package or one of its symbols. The example is generated alongside whichever symbol it represents, based on the standard naming conventions outlined in https://blog.golang.org/examples#TOC_4.

doc: generates the freeform documentation block for any of the above structures that can contain a documentation section.

import: generates the import code used to pull in a package.

Overriding with the --template-file option uses a key-value pair mapping a template name to the file containing the contents of the override template to use. Specified template files must exist: As with the godoc tool itself, only exported symbols will be shown in documentation. This can be expanded to include all symbols in a package by adding the --include-unexported/-u flag. If you want to blend the documentation generated by gomarkdoc with your own hand-written markdown, you can use the --embed/-e flag to change the gomarkdoc tool into an append/embed mode. When documentation is generated, gomarkdoc looks for a file in the location where the documentation is to be written and embeds the documentation if present. Otherwise, the documentation is appended to the end of the file. When running with embed mode enabled, gomarkdoc will look for either this single comment: Or the following pair of comments (in which case all content in between is replaced): If you would like to include files that are part of a build tag, you can specify build tags with the --tags flag. Tags are also supported through GOFLAGS, though command line and configuration file definitions override tags specified through GOFLAGS. You can also run gomarkdoc in a verification mode with the --check/-c flag. This is particularly useful for continuous integration when you want to make sure that a commit correctly updated the generated documentation. This flag is only supported when the --output/-o flag is specified, as the file provided there is what the tool is checking: If you're experiencing difficulty with gomarkdoc or just want to get more information about how it's executing underneath, you can add -v to show more logs. This can be chained a second time to show even more verbose logs: Some features of gomarkdoc rely on being able to detect information from the git repository containing the project. Since individual local git repositories may be configured differently from person to person, you may want to manually specify the information for the repository to remove any inconsistencies. This can be achieved with the --repository.url, --repository.default-branch and --repository.path options.
For example, this repository would be configured with: If you want to reuse configuration options across multiple invocations, you can specify a file in the folder where you invoke gomarkdoc containing configuration information that you would otherwise provide on the command line. This file may be a JSON, TOML, YAML, HCL, env, or Java properties file, but the name is expected to start with .gomarkdoc (e.g. .gomarkdoc.yml). All configuration options are available with the camel-cased form of their long name (e.g. --include-unexported becomes includeUnexported). Template overrides are specified as a map, rather than a set of key-value pairs separated by =. Options provided on the command line override those provided in the configuration file if an option is present in both. While most users will find the command line utility sufficient for their needs, this package may also be used programmatically by installing it directly, rather than its command subpackage. The programmatic usage provides more flexibility when selecting what packages to work with and what components to generate documentation for. A common usage will look something like this: This project uses itself to generate the README files in github.com/princjef/gomarkdoc and its subdirectories. To see the commands that are run to generate documentation for this repository, take a look at the Doc() and DocVerify() functions in magefile.go and the .gomarkdoc.yml file in the root of this repository. To run these commands in your own project, simply replace `go run ./cmd/gomarkdoc` with `gomarkdoc`. Know of another project that is using gomarkdoc? Open an issue with a description of the project and link to the repository and it might be featured here!
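The programmatic flow might look roughly like the following; the signatures are recalled from the project's README and should be treated as assumptions:

    package main

    import (
        "fmt"
        "go/build"

        "github.com/princjef/gomarkdoc"
        "github.com/princjef/gomarkdoc/lang"
        "github.com/princjef/gomarkdoc/logger"
    )

    func main() {
        out, err := gomarkdoc.NewRenderer()
        if err != nil {
            panic(err)
        }
        buildPkg, err := build.ImportDir("./mypkg", build.ImportComment)
        if err != nil {
            panic(err)
        }
        log := logger.New(logger.DebugLevel)
        pkg, err := lang.NewPackageFromBuild(log, buildPkg)
        if err != nil {
            panic(err)
        }
        doc, err := out.Package(pkg) // render the package as markdown
        if err != nil {
            panic(err)
        }
        fmt.Println(doc)
    }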
Package forms provides form creation and rendering functionalities, as well as FieldSet definition. Two kinds of forms can be created: base forms and Bootstrap3-compatible forms; even though the latter are automatically given the classes required to render correctly in a Bootstrap environment, every form can be given custom parameters such as classes, id, generic parameters (in key-value form) and stylesheet options.

Copyright 2016-present Wenhui Shen <www.webx.top>

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Package crc32 implements the 32-bit cyclic redundancy check, or CRC-32, checksum. See http://en.wikipedia.org/wiki/Cyclic_redundancy_check for information. Polynomials are represented in LSB-first form also known as reversed representation. See http://en.wikipedia.org/wiki/Mathematics_of_cyclic_redundancy_checks#Reversed_representations_and_reciprocal_polynomials for information.
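This doc matches the standard library's hash/crc32, so usage looks like the following (shown with the stdlib import path; substitute this package's path if using it as a drop-in replacement):

    package main

    import (
        "fmt"
        "hash/crc32"
    )

    func main() {
        data := []byte("hello world")

        // IEEE is the most common CRC-32 polynomial.
        fmt.Printf("%08x\n", crc32.ChecksumIEEE(data))

        // Castagnoli's polynomial often has hardware acceleration.
        table := crc32.MakeTable(crc32.Castagnoli)
        fmt.Printf("%08x\n", crc32.Checksum(data, table))
    }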
Package conf provides support for using environment variables and command line arguments for configuration. It is compatible with the GNU extensions to the POSIX recommendations for command-line options. See http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html There are no hard bindings for this package. This package takes a struct value and parses it for both the environment and flags. It supports several tags to customize the flag options. The field name and any parent struct name will be used for the long form of the command name unless the name is overridden. As an example, using this config struct: The following usage information would be generated, which you can display:

    Usage: conf.test [options] [arguments]

    OPTIONS

There is an API called Parse that can process a config struct with environment variable and command line flag overrides. There is also YAML support using the yaml package that is part of this module. There is a WithParse function that takes a slice of bytes containing the YAML, or WithParseReader, which takes any concrete value that knows how to Read. Additionally, if the config struct has a field of the slice type conf.Args then it will be populated with any remaining arguments from the command line after flags have been processed. For example, a program with a config struct like this: If that program is executed from the command line like this: Then the cfg.Args field will contain the string values ["serve", "http"]. The Args type has a method Num for convenient access to these arguments, such as this: You can add a version with a description by adding the Version type to your config type and setting these values at run time for display.
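A minimal sketch, assuming a v3-style API at github.com/ardanlabs/conf/v3 (the prefix, tags, and error value shown are assumptions to verify against the module you use):

    package main

    import (
        "errors"
        "fmt"
        "os"

        "github.com/ardanlabs/conf/v3"
    )

    type config struct {
        Web struct {
            Host string `conf:"default:0.0.0.0:3000"`
        }
        Args conf.Args // remaining command-line arguments
    }

    func main() {
        var cfg config
        help, err := conf.Parse("MYAPP", &cfg) // env vars look like MYAPP_WEB_HOST
        if err != nil {
            if errors.Is(err, conf.ErrHelpWanted) {
                fmt.Println(help)
                return
            }
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(cfg.Web.Host, cfg.Args.Num(0))
    }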
Package goracle is a database/sql/driver for Oracle DB. The connection string for the sql.Open("goracle", connString) call can be the simple type (with sid being the sexp returned by tnsping), or in the form of These are the defaults. Many advocate that a static session pool (min=max, incr=0) is better, with 1-10 sessions per CPU thread. See http://docs.oracle.com/cd/E82638_01/JJUCP/optimizing-real-world-performance.htm#JJUCP-GUID-BC09F045-5D80-4AF5-93F5-FEF0531E0E1D You may also use ConnectionParams to configure a connection. If you specify connectionClass, that'll reuse the same session pool without the connectionClass, but will specify it on each session acquire. Thus you can cluster the session pool with classes, or use POOLED for DRCP. For what can be used as "sid", see https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/configuring-naming-methods.html#GUID-E5358DEA-D619-4B7B-A799-3D2F802500F1
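For example, opening a connection and running a query through database/sql (the gopkg.in/goracle.v2 import path and the sample connection string are assumptions; adjust to your installation):

    package main

    import (
        "database/sql"
        "fmt"
        "time"

        _ "gopkg.in/goracle.v2" // registers the "goracle" driver
    )

    func main() {
        db, err := sql.Open("goracle", "user/password@localhost/orclpdb1")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        var now time.Time
        if err := db.QueryRow("SELECT SYSTIMESTAMP FROM DUAL").Scan(&now); err != nil {
            panic(err)
        }
        fmt.Println(now)
    }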
Package template implements data-driven templates for generating textual output. To generate HTML output, see package html/template, which has the same interface as this package but automatically secures HTML output against certain attacks. Templates are executed by applying them to a data structure. Annotations in the template refer to elements of the data structure (typically a field of a struct or a key in a map) to control execution and derive values to be displayed. Execution of the template walks the structure and sets the cursor, represented by a period '.' and called "dot", to the value at the current location in the structure as execution proceeds. The input text for a template is UTF-8-encoded text in any format. "Actions"--data evaluations or control structures--are delimited by "{{" and "}}"; all text outside actions is copied to the output unchanged. Actions may not span newlines, although comments can. Once parsed, a template may be executed safely in parallel. Here is a trivial example that prints "17 items are made of wool". More intricate examples appear below. Here is the list of actions. "Arguments" and "pipelines" are evaluations of data, defined in detail below. An argument is a simple value, denoted by one of the following. Arguments may evaluate to any type; if they are pointers the implementation automatically indirects to the base type when required. If an evaluation yields a function value, such as a function-valued field of a struct, the function is not invoked automatically, but it can be used as a truth value for an if action and the like. To invoke it, use the call function, defined below. A pipeline is a possibly chained sequence of "commands". A command is a simple value (argument) or a function or method call, possibly with multiple arguments: A pipeline may be "chained" by separating a sequence of commands with pipeline characters '|'. In a chained pipeline, the result of each command is passed as the last argument of the following command. The output of the final command in the pipeline is the value of the pipeline. The output of a command will be either one value or two values, the second of which has type error. If that second value is present and evaluates to non-nil, execution terminates and the error is returned to the caller of Execute. A pipeline inside an action may initialize a variable to capture the result. The initialization has syntax

    $variable := pipeline

where $variable is the name of the variable. An action that declares a variable produces no output. If a "range" action initializes a variable, the variable is set to the successive elements of the iteration. Also, a "range" may declare two variables, separated by a comma:

    range $index, $element := pipeline

in which case $index and $element are set to the successive values of the array/slice index or map key and element, respectively. Note that if there is only one variable, it is assigned the element; this is opposite to the convention in Go range clauses. A variable's scope extends to the "end" action of the control structure ("if", "with", or "range") in which it is declared, or to the end of the template if there is no such control structure. A template invocation does not inherit variables from the point of its invocation. When execution begins, $ is set to the data argument passed to Execute, that is, to the starting value of dot. Here are some example one-line templates demonstrating pipelines and variables.
All produce the quoted word "output": During execution functions are found in two function maps: first in the template, then in the global function map. By default, no functions are defined in the template but the Funcs method can be used to add them. Predefined global functions are named as follows. The boolean functions take any zero value to be false and a non-zero value to be true. There is also a set of binary comparison operators defined as functions: For simpler multi-way equality tests, eq (only) accepts two or more arguments and compares the second and subsequent to the first, returning in effect

    arg1==arg2 || arg1==arg3 || arg1==arg4 ...

(Unlike with || in Go, however, eq is a function call and all the arguments will be evaluated.) The comparison functions work on basic types only (or named basic types, such as "type Celsius float32"). They implement the Go rules for comparison of values, except that size and exact type are ignored, so any integer value, signed or unsigned, may be compared with any other integer value. (The arithmetic value is compared, not the bit pattern, so all negative integers are less than all unsigned integers.) However, as usual, one may not compare an int with a float32 and so on. Each template is named by a string specified when it is created. Also, each template is associated with zero or more other templates that it may invoke by name; such associations are transitive and form a name space of templates. A template may use a template invocation to instantiate another associated template; see the explanation of the "template" action above. The name must be that of a template associated with the template that contains the invocation. When parsing a template, another template may be defined and associated with the template being parsed. Template definitions must appear at the top level of the template, much like global variables in a Go program. The syntax of such definitions is to surround each template declaration with a "define" and "end" action. The define action names the template being created by providing a string constant. Here is a simple example: This defines two templates, T1 and T2, and a third T3 that invokes the other two when it is executed. Finally it invokes T3. If executed this template will produce the text By construction, a template may reside in only one association. If it's necessary to have a template addressable from multiple associations, the template definition must be parsed multiple times to create distinct *Template values, or must be copied with the Clone or AddParseTree method. Parse may be called multiple times to assemble the various associated templates; see the ParseFiles and ParseGlob functions and methods for simple ways to parse related templates stored in files. A template may be executed directly or through ExecuteTemplate, which executes an associated template identified by name. To invoke our example above, we might write, or to invoke a particular template explicitly by name,
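For concreteness, the trivial "17 items are made of wool" example mentioned above looks like this (it mirrors the standard library's own example):

    package main

    import (
        "os"
        "text/template"
    )

    type Inventory struct {
        Material string
        Count    uint
    }

    func main() {
        sweaters := Inventory{"wool", 17}
        tmpl, err := template.New("test").Parse("{{.Count}} items are made of {{.Material}}")
        if err != nil {
            panic(err)
        }
        // Prints: 17 items are made of wool
        if err := tmpl.Execute(os.Stdout, sweaters); err != nil {
            panic(err)
        }
    }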
Package fpdf implements a PDF document generator with high level support for text, drawing and images.

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

go-pdf/fpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. go-pdf/fpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the go-pdf/fpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test.
In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the go-pdf/fpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it.
Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoek and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations.

Roadmap:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.SummaryCompare() functions belong to a separate, internal package and are not part of the gofpdf library.
If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
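The simple document described above might look like the following (a minimal sketch assuming the github.com/go-pdf/fpdf import path):

    package main

    import "github.com/go-pdf/fpdf"

    func main() {
        pdf := fpdf.New("P", "mm", "A4", "") // portrait, millimeters, A4, no font dir
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Errors are checked once, at output time, per the error scheme above.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            panic(err)
        }
    }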
Package rdns implements a variety of functionality to make DNS resolution configurable and extensible. It offers DNS resolvers as well as listeners with a number of protocols such as DNS-over-TLS, DNS-over-HTTPS, and plain wire format DNS. In addition it is possible to route queries based on the query name or type. There are 4 fundamental types of objects available in this library. Resolvers implement name resolution with upstream resolvers. All of them implement connection reuse as well as pipelining (sending multiple queries and receiving them out-of-order). Groups typically wrap multiple resolvers and implement failover or load-balancing algorithms across all resolvers in the group. Groups too are resolvers and can therefore be nested into other groups for more complex query routing. Routers are used to send DNS queries to resolvers, groups, or even other routers based on the query content. As with groups, routers too are resolvers that can be combined to form more advanced configurations. While resolvers handle outgoing queries to upstream servers, listeners are the receivers of queries. Multiple listeners can be started for different protocols and on different ports. Each listener forwards received queries to one resolver, group, or router. This example starts a stub resolver on the local machine which will forward all queries via DNS-over-TLS to provide privacy.
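A hedged sketch of that stub-resolver setup; the constructor names and signatures below are recalled from the routedns README and are assumptions to verify against the package docs:

    package main

    import rdns "github.com/folbricht/routedns"

    func main() {
        // Upstream: a DNS-over-TLS client.
        r, err := rdns.NewDoTClient("cloudflare-dot", "1.1.1.1:853", rdns.DoTClientOptions{})
        if err != nil {
            panic(err)
        }
        // Listener: plain DNS on the local machine, forwarding to the DoT upstream.
        l := rdns.NewDNSListener("local-udp", "127.0.0.1:53", "udp", r)
        panic(l.Start())
    }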
Package goncurses is a new curses (ncurses) library for the Go programming language. It implements all the ncurses extension libraries: form, menu and panel. Minimal operation would consist of initializing the display: It is important to always call End() before your program exits. If you fail to do so, the terminal will not perform properly and will either need to be reset or restarted completely. CAUTION: Calls to ncurses functions are normally not atomic nor reentrant and therefore extreme care should be taken to ensure ncurses functions are not called concurrently. Specifically, never write data to the same window concurrently nor accept input and send output to the same window as both alter the underlying C data structures in a non safe manner. Ideally, you should structure your program to ensure all ncurses related calls happen in a single goroutine. This is probably most easily achieved via channels and Go's built-in select. Alternatively, or additionally, you can use a mutex to protect any calls in multiple goroutines from happening concurrently. Failure to do so will result in unpredictable and undefined behaviour in your program. The examples directory contains demonstrations of many of the capabilities goncurses can provide.
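A minimal session, following the initialize-then-End() pattern described above (assuming the github.com/rthornton128/goncurses import path):

    package main

    import gc "github.com/rthornton128/goncurses"

    func main() {
        stdscr, err := gc.Init()
        if err != nil {
            panic(err)
        }
        // Always restore the terminal before exiting.
        defer gc.End()

        stdscr.Print("Hello, ncurses! Press any key to exit.")
        stdscr.Refresh()
        stdscr.GetChar()
    }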
Package debounce provides a debouncer func. The most typical use case would be a user typing text into a form; the UI needs an update, but let's wait for a break.
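A small sketch (assuming the github.com/bep/debounce import path, where New returns the debouncer func):

    package main

    import (
        "fmt"
        "time"

        "github.com/bep/debounce"
    )

    func main() {
        debounced := debounce.New(100 * time.Millisecond)

        for i := 0; i < 3; i++ {
            // Each call resets the timer; only the last function runs.
            debounced(func() { fmt.Println("saving draft...") })
        }

        time.Sleep(200 * time.Millisecond) // wait for the trailing invocation
    }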
Package grip provides a flexible logging package for basic Go programs. Drawing inspiration from Go and Python's standard library logging, as well as systemd's journal service and other logging systems, Grip provides a number of very powerful logging abstractions in one high-level package. The central type of the grip package is the Journaler type, instances of which provide a distinct log capturing system. For ease, following from the Go standard library, the grip package provides parallel public methods that use an internal "standard" Journaler instance in the grip package, which has some defaults configured and may be sufficient for many use cases. The send.Sender interface provides a way of changing the logging backend, and the send package provides a number of alternate implementations of logging systems, including: systemd's journal, logging to standard output, logging to a file, and generic syslog support. The message.Composer interface is the representation of all messages. They are implemented to provide a raw structured form as well as a string representation for more conventional logging output. Furthermore, they are intended to be easy to produce, and to defer more expensive processing until they're being logged, to prevent expensive operations producing messages that are below threshold. Logging helpers exist for the following levels: These methods accept both strings (message content) and types that implement the message.MessageComposer interface. Composer types make it possible to delay generating a message unless the logger is over the logging threshold. Use this to avoid expensive serialization operations for suppressed logging operations. All levels also have additional methods with `ln` and `f` appended to the end of the method name which allow Println() and Printf() style functionality. You must pass printf/println-style arguments to these methods. The conditional logging methods take two arguments: a Boolean and a message argument. Messages can be strings, objects that implement the MessageComposer interface, or errors. If the condition boolean is true, the threshold level is met, and the message to log is not an empty string, then it logs the resolved message. Use conditional logging methods to potentially suppress log messages based on situations orthogonal to log level, with "log sometimes" or "log rarely" semantics. Combine with MessageComposers to avoid expensive message building operations. The MultiCatcher type makes it possible to collect errors from a group of operations and then aggregate them as a single error.
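A usage sketch of the package-level helpers (assuming the github.com/mongodb/grip import path; method names follow the level/`f`/`When` conventions described above):

    package main

    import (
        "github.com/mongodb/grip"
        "github.com/mongodb/grip/message"
    )

    func main() {
        // Package-level helpers use the internal "standard" Journaler.
        grip.Info("starting up")
        grip.Errorf("failed to open %s", "config.yml")

        // Conditional logging: only logs when the condition is true.
        grip.InfoWhen(true, "condition met")

        // A Composer defers expensive work until the message is actually logged.
        grip.Debug(message.Fields{"op": "sync", "count": 42})
    }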
Package zfs implements basic manipulation of ZFS pools and data sets. It uses the libzfs C library instead of the zfs CLI tools, with the goal of enabling the use and manipulation of OpenZFS from within Go projects.

TODO: Adding to the pool. (Add the given vdevs to the pool.)

TODO: Scan for pools.
Package nmap parses Nmap XML data into a similarly formed struct.
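A parsing sketch; the github.com/lair-framework/go-nmap import path and the field names are assumptions, so check the package's types:

    package main

    import (
        "fmt"
        "os"

        nmap "github.com/lair-framework/go-nmap"
    )

    func main() {
        data, err := os.ReadFile("scan.xml") // produced by: nmap -oX scan.xml <target>
        if err != nil {
            panic(err)
        }
        run, err := nmap.Parse(data)
        if err != nil {
            panic(err)
        }
        for _, host := range run.Hosts {
            fmt.Println(host.Addresses) // struct fields mirror the XML structure
        }
    }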
Amino is an encoding library that can handle interfaces (like protobuf "oneof") well. This is achieved by prefixing bytes before each "concrete type". A concrete type is some non-interface value (generally a struct) which implements the interface to be (de)serialized. Not all structures need to be registered as concrete types -- only when they will be stored in interface type fields (or interface type slices) do they need to be registered. All interfaces and the concrete types that implement them must be registered. Notice that an interface is represented by a nil pointer. Structures that must be deserialized as pointer values must be registered with a pointer value as well. It's OK to (de)serialize such structures in non-pointer (value) form, but when deserializing such structures into an interface field, they will always be deserialized as pointers. All registered concrete types are encoded with leading 4 bytes (called "prefix bytes"), even when it's not held in an interface field/element. In this way, Amino ensures that concrete types (almost) always have the same canonical representation. The first byte of the prefix bytes must not be a zero byte, so there are 2**(8*4)-2**(8*3) possible values. When there are 4096 types registered at once, the probability of there being a conflict is ~ 0.2%. See https://instacalc.com/51189 for estimation. This is assuming that all registered concrete types have unique natural names (e.g. prefixed by a unique entity name such as "com.tendermint/", and not "mined/grinded" to produce a particular sequence of "prefix bytes"). TODO Update instacalc.com link with 255/256 since 0x00 is an escape. Do not mine/grind to produce a particular sequence of prefix bytes, and avoid using dependencies that do so. Since 4 bytes are not sufficient to ensure no conflicts, sometimes it is necessary to prepend more than the 4 prefix bytes for disambiguation. Like the prefix bytes, the disambiguation bytes are also computed from the registered name of the concrete type. There are 3 disambiguation bytes, and in binary form they always precede the prefix bytes. The first byte of the disambiguation bytes must not be a zero byte, so there are 2**(8*3)-2**(8*2) possible values. The prefix bytes never start with a zero byte, so the disambiguation bytes are escaped with 0x00. Notice that the 4 prefix bytes always immediately precede the binary encoding of the concrete type. To compute the disambiguation bytes, we take `hash := sha256(concreteTypeName)`, and drop the leading 0x00 bytes. In the example above, hash has two leading 0x00 bytes, so we drop them. The first 3 bytes are called the "disambiguation bytes" (in angle brackets). The next 4 bytes are called the "prefix bytes" (in square brackets).
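A registration and round-trip sketch (assuming a go-amino codec API with RegisterInterface/RegisterConcrete and MarshalBinaryBare; the import path and type names here are illustrative):

    package main

    import (
        "fmt"

        amino "github.com/tendermint/go-amino"
    )

    type Animal interface{}

    type Dog struct{ Name string }
    type Cat struct{ Name string }

    func main() {
        cdc := amino.NewCodec()
        // Register the interface and every concrete type that may be stored in it.
        cdc.RegisterInterface((*Animal)(nil), nil)
        cdc.RegisterConcrete(Dog{}, "example/Dog", nil)
        cdc.RegisterConcrete(Cat{}, "example/Cat", nil)

        var a Animal = Dog{Name: "rex"}
        bz, err := cdc.MarshalBinaryBare(a) // prefix bytes identify the concrete type
        if err != nil {
            panic(err)
        }

        var back Animal
        if err := cdc.UnmarshalBinaryBare(bz, &back); err != nil {
            panic(err)
        }
        fmt.Printf("%#v\n", back)
    }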
Package recaptcha handles reCaptcha (http://www.google.com/recaptcha) form submissions. This package is designed to be called from within an HTTP server or web framework which offers reCaptcha form inputs and requires them to be evaluated for correctness. Edit the recaptchaPrivateKey constant before building and using.
Package fetchbot provides a simple and flexible web crawler that follows robots.txt policies and crawl delays. It is very much a rewrite of gocrawl (https://github.com/PuerkitoBio/gocrawl) with a simpler API and fewer features built in, but at the same time more flexibility. As for Go itself, sometimes less is more! To install, simply run in a terminal: The package has a single external dependency, robotstxt (https://github.com/temoto/robotstxt). It also integrates code from the iq package (https://github.com/kylelemons/iq). The API documentation is available on godoc.org (http://godoc.org/github.com/PuerkitoBio/fetchbot). The following example (taken from /example/short/main.go) shows how to create and start a Fetcher, one way to send commands, and how to stop the fetcher once all commands have been handled. A more complex and complete example can be found in the repository, at /example/full/. Basically, a Fetcher is an instance of a web crawler, independent of other Fetchers. It receives Commands via the Queue, executes the requests, and calls a Handler to process the responses. A Command is an interface that tells the Fetcher which URL to fetch, and which HTTP method to use (i.e. "GET", "HEAD", ...). A call to Fetcher.Start() returns the Queue associated with this Fetcher. This is the thread-safe object that can be used to send commands, or to stop the crawler. Both the Command and the Handler are interfaces, and may be implemented in various ways. They are defined like so: A Context is a struct that holds the Command and the Queue, so that the Handler always knows which Command initiated this call, and has a handle to the Queue. A Handler is similar to the net/http Handler, and middleware-style combinations can be built on top of it. A HandlerFunc type is provided so that simple functions with the right signature can be used as Handlers (like net/http.HandlerFunc), and there is also a multiplexer Mux that can be used to dispatch calls to different Handlers based on some criteria. The Fetcher recognizes a number of interfaces that the Command may implement, for more advanced needs:

* BasicAuthProvider: Implement this interface to specify the basic authentication credentials to set on the request.

* CookiesProvider: If the Command implements this interface, the provided Cookies will be set on the request.

* HeaderProvider: Implement this interface to specify the headers to set on the request.

* ReaderProvider: Implement this interface to set the body of the request, via an io.Reader.

* ValuesProvider: Implement this interface to set the body of the request, as form-encoded values. If the Content-Type is not specifically set via a HeaderProvider, it is set to "application/x-www-form-urlencoded". ReaderProvider and ValuesProvider should be mutually exclusive as they both set the body of the request. If both are implemented, the ReaderProvider interface is used.

* Handler: Implement this interface if the Command's response should be handled by a specific callback function. By default, the response is handled by the Fetcher's Handler, but if the Command implements this, this handler function takes precedence and the Fetcher's Handler is ignored.

Since the Command is an interface, it can be a custom struct that holds additional information, such as an ID for the URL (e.g. from a database), or a depth counter so that the crawling stops at a certain depth, etc.
For basic commands that don't require additional information, the package provides the Cmd struct that implements the Command interface. This is the Command implementation used when using the various Queue.SendString\* methods. There is also a convenience HandlerCmd struct for the commands that should be handled by a specific callback function. It is a Command with a Handler interface implementation. The Fetcher has a number of fields that provide further customization:

* HttpClient: By default, the Fetcher uses the net/http default Client to make requests. A different client can be set on the Fetcher.HttpClient field.

* CrawlDelay: That value is used only if there is no delay specified by the robots.txt of a given host.

* UserAgent: Sets the user agent string to use for the requests and to validate against the robots.txt entries.

* WorkerIdleTTL: Sets the duration that a worker goroutine can wait without receiving new commands to fetch. If the idle time-to-live is reached, the worker goroutine is stopped and its resources are released. This can be especially useful for long-running crawlers.

* AutoClose: If true, closes the queue automatically once the number of active hosts reaches 0.

* DisablePoliteness: If true, ignores the robots.txt policies of the hosts.

What fetchbot doesn't do - especially compared to gocrawl - is that it doesn't keep track of already visited URLs, and it doesn't normalize the URLs. This is outside the scope of this package - all commands sent on the Queue will be fetched. Normalization can easily be done (e.g. using https://github.com/PuerkitoBio/purell) before sending the Command to the Fetcher. How to keep track of visited URLs depends on the use-case of the specific crawler, but for an example, see /example/full/main.go. The BSD 3-Clause license (http://opensource.org/licenses/BSD-3-Clause), the same as the Go language. The iq_slice.go file is under the CDDL-1.0 license (details in the source file).
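A short sketch along the lines of /example/short/main.go (the SendStringGet and Close semantics are assumptions recalled from the package docs):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/PuerkitoBio/fetchbot"
    )

    func main() {
        // The Handler receives each response along with its originating Command.
        h := fetchbot.HandlerFunc(func(ctx *fetchbot.Context, res *http.Response, err error) {
            if err != nil {
                fmt.Println("error:", err)
                return
            }
            fmt.Println(res.StatusCode, ctx.Cmd.URL())
        })

        f := fetchbot.New(h)
        q := f.Start()
        // Enqueue GET commands from raw URL strings.
        if _, err := q.SendStringGet("http://example.com/"); err != nil {
            fmt.Println(err)
        }
        q.Close() // stop accepting commands and wait for pending ones to finish
    }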
Package crypto provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. The cornerstone of this toolbox is the 'abstract' sub-package, which defines abstract interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes an abstract.Group interface generically supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can thus write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The abstract group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. The abstract.Suite interface builds further on the abstract.Group API to represent an abstraction of entire pluggable ciphersuites, which include a group (e.g., curve) suitable for advanced public-key crypto together with a suitably matched set of symmetric-key crypto algorithms. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from a specified source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. See below for more complete examples. Various sub-packages provide several specific implementations of these abstract cryptographic interfaces. In particular, the 'nist' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms, and of NIST-standardized elliptic curves built on the Go crypto library. The 'edwards' sub-package provides the abstract group interface using more recent Edwards curves, including the popular Ed25519 curve. The 'openssl' sub-package offers an alternative implementation of NIST-standardized elliptic curves and symmetric-key algorithms, built as wrappers around OpenSSL's crypto library. Other sub-packages build more interesting high-level cryptographic tools atop these abstract primitive interfaces, including:

- poly: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example.
- proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true.

- anon: Anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is.

- shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust the shuffler(s) to shuffle votes/bids honestly.

For now, this library should be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictates. As should be obvious, this library is intended for use by developers who are at least moderately knowledgeable about crypto. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then NaCl/Sodium pursues this worthy goal (http://doc.libsodium.org). This toolkit's purpose is to make it possible - and preferably but not necessarily easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (http://charm-crypto.com/). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This example illustrates how to use the crypto toolkit's abstract group API to perform basic Diffie-Hellman key exchange calculations, using the NIST-standard P256 elliptic curve in this case. Any other suitable elliptic curve or other cryptographic group may be used simply by changing the first line that picks the suite. This example illustrates how the crypto toolkit may be used to perform "pure" ElGamal encryption, in which the message to be encrypted is small enough to be embedded directly within a group element (e.g., in an elliptic curve point). For basic background on ElGamal encryption see for example http://en.wikipedia.org/wiki/ElGamal_encryption. Most public-key crypto libraries tend not to support embedding data in points, in part because for "vanilla" public-key encryption you don't need it: one would normally just generate an ephemeral Diffie-Hellman secret and use that to seed a symmetric-key crypto algorithm such as AES, which is much more efficient per bit and works for arbitrary-length messages.
However, in many advanced public-key crypto algorithms it is often useful to be able to embed data directly into points and compute with them: as just one of many examples, see the proactively verifiable anonymous messaging scheme prototyped in Verdict (http://dedis.cs.yale.edu/dissent/papers/verdict-abs). For fancier versions of ElGamal encryption implemented in this toolkit see for example anon.Encrypt, which encrypts a message for one of several possible receivers forming an explicit anonymity set.
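The godoc examples referred to above are elided here; the following is a minimal sketch of the keypair and Diffie-Hellman calculations, assuming the dedis/crypto import paths and the Suite/Pick/Mul API shape described in this text (not necessarily the exact example code shipped with the package):

    package main

    import (
        "fmt"

        "github.com/dedis/crypto/nist"
        "github.com/dedis/crypto/random"
    )

    func main() {
        suite := nist.NewAES128SHA256P256() // pick a ciphersuite

        // Alice: pick a random private scalar, then multiply the
        // standard base point (the nil argument) by it.
        a := suite.Scalar().Pick(random.Stream)
        A := suite.Point().Mul(nil, a)

        // Bob does the same.
        b := suite.Scalar().Pick(random.Stream)
        B := suite.Point().Mul(nil, b)

        // Each side multiplies the other's public point by its own
        // private scalar; both arrive at the same shared secret.
        sA := suite.Point().Mul(B, a)
        sB := suite.Point().Mul(A, b)
        fmt.Println(sA.Equal(sB)) // true
    }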
Golog aspires to be an ISO Prolog interpreter. It currently supports a small subset of the standard. Any deviations from the standard are bugs. Typical usage looks something like this: This sample highlights a few key aspects of using Golog. To start, Golog data structures are immutable. NewMachine() creates an empty Golog machine containing just the standard library. Consult() creates another new machine with some extra code loaded. The original, empty machine is untouched. It's common to build a large Golog machine during init() and then add extra rules to it at runtime. Because Golog machines are immutable, multiple goroutines can access, run and "modify" machines in parallel. This design also opens possibilities for and-parallel and or-parallel execution. Most methods, like Consult(), can accept Prolog code in several forms. The example above shows Prolog as a string. We could have used any io.Reader instead. Error handling is one oddity. Golog methods follow Go convention by returning an error value to indicate that something went wrong. However, in many cases the caller knows that an error is highly improbable and doesn't want extra code to deal with the common case. For most methods, Golog offers a variant with a trailing underscore, like ByName_(), which panics on error instead of returning an error value. See also:
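The usage sample referred to above is elided; a minimal sketch, assuming the github.com/mndrix/golog import path and its ProveAll query method:

    package main

    import (
        "fmt"

        "github.com/mndrix/golog"
    )

    func main() {
        // NewMachine returns an immutable machine holding just the
        // standard library; Consult returns a new machine with the
        // extra clauses loaded, leaving the original untouched.
        m := golog.NewMachine().Consult(`
            father(john, mary).
            father(john, sue).
        `)

        // ByName_ panics on error instead of returning an error value.
        for _, solution := range m.ProveAll(`father(john, X).`) {
            fmt.Println(solution.ByName_("X"))
        }
    }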
Package generator provides the code generation library for go-swagger. The general idea is that you should rarely see interface{} in the generated code. You get a complete representation of a swagger document in somewhat idiomatic go. To do so, there is a set of mapping patterns that are applied to map a Swagger specification to go types: NOTE: anyOf and oneOf JSON-schema constructs are not supported by Swagger 2.0 A property on a definition is a pointer when any one of the following conditions is met: JSONSchema, and by extension Swagger, allows for arrays of fixed size whose schema describes the items at each index. This can be combined with additional items to form some kind of tuple with varargs. To map this to go, the generator creates a struct that has fixed names and a custom json serializer, as sketched below. NOTE: the additionalItems keyword is not supported by Swagger 2.0. However, the generator and validator parts of go-swagger do support it. The code that is generated also gets the doc comments that are used by the scanner to generate a spec from go code. So after generation you should be able to reverse-generate a spec from the code that was generated from your spec. It should be equivalent to the original spec but might miss some default values and examples.
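As an illustration of the tuple mapping described above, here is a hypothetical sketch (the type and field names are invented, not go-swagger's actual generated output) of a fixed-size array with additionalItems rendered as a struct with a custom json serializer:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Tuple models a JSON-schema array whose first two positions have
    // fixed schemas, with additionalItems collecting the varargs tail.
    type Tuple struct {
        P0    string    // items[0]
        P1    int64     // items[1]
        Extra []float64 // additionalItems
    }

    // MarshalJSON flattens the struct back into a single JSON array.
    func (t Tuple) MarshalJSON() ([]byte, error) {
        out := []interface{}{t.P0, t.P1}
        for _, v := range t.Extra {
            out = append(out, v)
        }
        return json.Marshal(out)
    }

    func main() {
        b, _ := json.Marshal(Tuple{P0: "id", P1: 42, Extra: []float64{1.5, 2.5}})
        fmt.Println(string(b)) // ["id",42,1.5,2.5]
    }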
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals. This is supported by formatting the cron spec as "@every <duration>", where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates every 1 hour, 30 minutes, 10 seconds. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. By default, all interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package http://www.golang.org/pkg/time). The time zone may be overridden by providing an additional space-separated field at the beginning of the cron spec, of the form "TZ=Asia/Tokyo". Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking:
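A minimal usage sketch, assuming the github.com/robfig/cron import path this documentation comes from:

    package main

    import (
        "fmt"
        "time"

        "github.com/robfig/cron"
    )

    func main() {
        c := cron.New()
        // Six fields: second, minute, hour, day-of-month, month, day-of-week.
        c.AddFunc("0 30 * * * *", func() { fmt.Println("every hour on the half hour") })
        // Fixed-interval schedule using time.ParseDuration syntax.
        c.AddFunc("@every 1h30m10s", func() { fmt.Println("every 1h30m10s") })
        c.Start()
        defer c.Stop()
        time.Sleep(2 * time.Hour) // keep the program alive while jobs run
    }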
Copyright (c) 2018-2019 The kaspanet developers Copyright (c) 2013-2018 The btcsuite developers Copyright (c) 2015-2016 The Decred developers Copyright (c) 2013-2014 Conformal Systems LLC. Use of this source code is governed by an ISC license that can be found in the LICENSE file. Kaspad is a full-node kaspa implementation written in Go. The default options are sane for most users, meaning kaspad will work 'out of the box' without additional configuration. However, there are also a wide variety of flags that can be used to control it. Usage: For an up-to-date help message: The long form of all option flags (except -C) can be specified in a configuration file that is automatically parsed when kaspad starts up. By default, the configuration file is located at ~/.kaspad/kaspad.conf on POSIX-style operating systems and %LOCALAPPDATA%\kaspad\kaspad.conf on Windows. The -C (--configfile) flag can be used to override this location.
Package podcast generates a fully compliant iTunes and RSS 2.0 podcast feed for GoLang using a simple API. Full documentation with detailed examples is located at https://godoc.org/github.com/eduncan911/podcast To use, `go get` and `import` the package like your typical GoLang library. The API exposes a number of method receivers on structs that implement the logic required to comply with the specifications and ensure a compliant feed. A number of overrides occur to help with iTunes visibility of your episodes. Notably, the `Podcast.AddItem` function performs most of the heavy lifting by taking the `Item` input and performing validation, overrides and duplicate setters through the feed. Full detailed examples of the API are at https://godoc.org/github.com/eduncan911/podcast. This library is supported on GoLang 1.7 and higher. We have implemented Go Modules support and the CI pipeline shows it working with new installs, tested with Go 1.13. To keep 1.7 compatibility, we use `go mod vendor` to maintain the `vendor/` folder for older runtimes (1.7 and later). If either runtime has an issue, please create an Issue and I will address it. For version 1.x, you are not restricted in having full control over your feeds. You may choose to skip the API methods and instead use the structs directly. The fields have been grouped by RSS 2.0 and iTunes fields, with iTunes-specific fields all prefixed with the letter `I`. However, do note that the 2.x version currently in progress will break this extensibility and enforce API methods going forward. This is to ensure that the feed can both be marshalled and unmarshalled back and forth (the current 1.x branch can only be unmarshalled - hence the work for 2.x). `go-fuzz` has been added in 1.4.1, covering all exported API methods. They have been run extensively and no issues have come out of them yet (most tests were run overnight, over about 11 hours with zero crashes). If you wish to help fuzz the inputs, with Go 1.13 or later you can run `go-fuzz` on any of them. To obtain a list of available funcs to pass, just run `go-fuzz` without any parameters: If you do find an issue, please raise an issue immediately and I will quickly address it. The 1.x branch is now mostly in maintenance mode, open to PRs. This means no more planned features are expected on the 1.x branch. With the success of 6 iTunes-accepted podcasts I have published with this library, and with the feedback from the community, the 1.x releases are now considered stable. The 2.x branch's primary focus is to allow for bi-directional marshalling. Currently, the 1.x branch only allows unmarshalling to a serial feed. An attempt to marshal a serialized feed back into a Podcast form will error or not work correctly. Note that while the 2.x branch is targeted to remain backwards compatible, that holds only if you use the public API funcs to set parameters. Several of the underlying public fields are being removed in order to accommodate the marshalling of serialized data. Therefore, a version 2.x is denoted for this release. We use the SemVer versioning schema. You can rest assured that pulling 1.x branches will remain backwards compatible now and into the future. However, the new 2.x branch, while keeping the same API, is expected to break those that bypass the API methods and use the underlying public properties instead. v1.4.2 v1.4.1 v1.4.0 v1.3.2 v1.3.1 v1.3.0 v1.2.1 v1.2.0 v1.1.0 v1.0.0 RSS 2.0: https://cyber.harvard.edu/rss/rss.html Podcasts: https://help.apple.com/itc/podcasts_connect/#/itca5b22233
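A minimal sketch of the API flow described above (the names follow the package's godoc, but treat the exact signatures as assumptions to verify there):

    package main

    import (
        "os"
        "time"

        "github.com/eduncan911/podcast"
    )

    func main() {
        now := time.Now()
        // New takes the feed-level RSS fields.
        p := podcast.New("Example Show", "http://example.com", "An example feed.", &now, &now)

        item := podcast.Item{
            Title:       "Episode 1",
            Description: "The first episode.",
            Link:        "http://example.com/ep1",
        }
        item.AddEnclosure("http://example.com/ep1.mp3", podcast.MP3, 55*1024*1024)

        // AddItem performs the validation, overrides and duplicate
        // setters mentioned above before appending to the feed.
        if _, err := p.AddItem(item); err != nil {
            panic(err)
        }
        p.Encode(os.Stdout) // write the compliant RSS XML
    }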
A process model for go. Ifrit is a small set of interfaces for composing single-purpose units of work into larger programs. Users divide their program into single-purpose units of work, each of which implements the `Runner` interface. Each `Runner` can be invoked to create a `Process` which can be monitored and signaled to stop. The name Ifrit comes from a type of daemon in Arabic folklore. It's a play on the unix term 'daemon' to indicate a process that is managed by the init system. Ifrit ships with a standard library which contains packages for common processes - http servers, integration test helpers - alongside packages which model process supervision and orchestration. These packages can be combined to form complex servers which start and shutdown cleanly. The advantage of small, single-responsibility processes is that they are simple, and thus can be made reliable. Ifrit's interfaces are designed to be free of race conditions and edge cases, allowing larger orchestrated processes to also be made reliable. The overall effect is less code and more reliability as your system grows with grace.
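A minimal sketch of the Runner/Process relationship, assuming the github.com/tedsuo/ifrit import path:

    package main

    import (
        "fmt"
        "os"

        "github.com/tedsuo/ifrit"
    )

    // noop is a single-purpose unit of work: it signals readiness,
    // then runs until it is signaled to stop.
    type noop struct{}

    func (noop) Run(signals <-chan os.Signal, ready chan<- struct{}) error {
        close(ready)     // tell the invoker we are up
        sig := <-signals // block until signaled
        fmt.Println("stopping on", sig)
        return nil
    }

    func main() {
        process := ifrit.Invoke(noop{}) // returns once ready is closed
        process.Signal(os.Interrupt)    // ask the process to stop
        if err := <-process.Wait(); err != nil {
            fmt.Println("exited with error:", err)
        }
    }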
Package dom provides Go bindings for the JavaScript DOM APIs. This package is an in-progress effort to provide idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with them. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point for using the dom package is the GetWindow() function, which will return a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. Behind these interface values will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used (see the sketch at the end of this description). Example: Several functions in the JavaScript DOM return "live" collections of elements, that is, collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behavior, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same thing. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time.
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM.
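A sketch of the entry point and type assertions described above, assuming a GopherJS build and the honnef.co/go/js/dom import path:

    package main

    import "honnef.co/go/js/dom"

    func main() {
        // GetWindow is the usual entry point; generic lookups return
        // interface values such as Element.
        d := dom.GetWindow().Document()
        el := d.GetElementByID("my-paragraph")

        // Type-assert to the concrete element type to reach its
        // specific API.
        p := el.(*dom.HTMLParagraphElement)
        p.SetInnerHTML("hello")
    }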
Package mempool provides a policy-enforced pool of unmined Decred transactions. A key responsibility of the Decred network is mining transactions – regular transactions and stake transactions – into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy. The Policy configuration options have flags that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered to be a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, some examples of the criteria that are required for a transaction to be considered standard are that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms. Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available. This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy. Feature Overview: The following is a quick overview of the major features. It is not intended to be an exhaustive list. - Maintain a pool of fully validated transactions - Stake transaction support (ticket purchases, votes and revocations) - Orphan transaction support (transactions that spend from unknown outputs) - Configurable transaction acceptance policy - Additional metadata tracking for each transaction - Manual control of transaction removal Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The first indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, and errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
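A sketch of the error differentiation just described, assuming dcrd-style import paths (the mempool and blockchain packages of the Decred node):

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/blockchain"
        "github.com/decred/dcrd/mempool"
    )

    // describeErr distinguishes rule violations from unexpected errors
    // using the type assertions described above.
    func describeErr(err error) string {
        rerr, ok := err.(mempool.RuleError)
        if !ok {
            return "unexpected error (e.g. a database failure)"
        }
        switch rerr.Err.(type) {
        case mempool.TxRuleError:
            return "mempool acceptance rule violation"
        case blockchain.RuleError:
            return "consensus (blockchain) rule violation"
        default:
            return "unrecognized rule violation"
        }
    }

    func main() { fmt.Println(describeErr(nil)) }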
Package edn implements encoding and decoding of EDN values as defined in https://github.com/edn-format/edn. For a full introduction on how to use go-edn, see https://github.com/go-edn/edn/blob/v1/docs/introduction.md. Fully self-contained examples of go-edn can be found at https://github.com/go-edn/edn/tree/v1/examples. Note that the small examples in this package do not check errors as pervasively as you should when you use this package. This is done because I'd like the examples to be easily readable and understandable. The bigger examples provide proper error handling. EDN, in contrast to JSON, supports arbitrary values as keys. This example shows how one can implement enums and sets, and how to support multiple different forms for a specific value type. The set implemented here supports the notation `:all` for all values. This example shows how to read and write basic EDN tags, and how this can be utilised: In contrast to encoding/json, you can read in data where you only know that the input satisfies some sort of interface, provided the value is tagged. This example shows how one can do streaming with the decoder, and how to properly know when the stream has no elements left.
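A minimal sketch of round-tripping a value, assuming the encoding/json-style Marshal/Unmarshal API and `edn` struct tags described in the introduction linked above:

    package main

    import (
        "fmt"

        "olympos.io/encoding/edn" // the go-edn v1 import path
    )

    type Point struct {
        X int `edn:"x"`
        Y int `edn:"y"`
    }

    func main() {
        bs, err := edn.Marshal(Point{X: 1, Y: 2})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(bs)) // e.g. {:x 1 :y 2}

        var p Point
        if err := edn.Unmarshal(bs, &p); err != nil {
            panic(err)
        }
        fmt.Println(p) // {1 2}
    }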
Package nmap parses Nmap XML data into a similarly formed struct.
Package jump implements the "jump consistent hash" algorithm. Reference C++ implementation: [1]. Jump consistent hash works by computing when its output changes as the number of buckets increases. Let ch(key, num_buckets) be the consistent hash for the key when there are num_buckets buckets. Clearly, for any key, k, ch(k, 1) is 0, since there is only the one bucket. In order for the consistent hash function to be balanced, ch(k, 2) will have to stay at 0 for half the keys, k, while it will have to jump to 1 for the other half. In general, ch(k, n+1) has to stay the same as ch(k, n) for n/(n+1) of the keys, and jump to n for the other 1/(n+1) of the keys. Here are examples of the consistent hash values for three keys, k1, k2, and k3, as num_buckets goes up: A linear time algorithm can be defined by using the formula for the probability of ch(key, j) jumping when j increases. It essentially walks across a row of this table. Given a key and number of buckets, the algorithm considers each successive bucket, j, from 1 to num_buckets-1, and uses ch(key, j) to compute ch(key, j+1). At each bucket, j, it decides whether to keep ch(k, j+1) the same as ch(k, j), or to jump its value to j. In order to jump for the right fraction of keys, it uses a pseudorandom number generator with the key as its seed. To jump for 1/(j+1) of keys, it generates a uniform random number between 0.0 and 1.0, and jumps if the value is less than 1/(j+1). At the end of the loop, it has computed ch(k, num_buckets), which is the desired answer. In code (see the Go sketch below): We can convert this to a logarithmic time algorithm by exploiting that ch(key, j+1) is usually unchanged as j increases, only jumping occasionally. The algorithm will only compute the destinations of jumps, that is, the j’s for which ch(key, j+1) ≠ ch(key, j). Also notice that for these j’s, ch(key, j+1) = j. To develop the algorithm, we will treat ch(key, j) as a random variable, so that we can use the notation for random variables to analyze the fractions of keys for which various propositions are true. That will lead us to a closed form expression for a pseudorandom variable whose value gives the destination of the next jump. Suppose that the algorithm is tracking the bucket numbers of the jumps for a particular key, k. And suppose that b was the destination of the last jump, that is, ch(k, b) ≠ ch(k, b+1), and ch(k, b+1) = b. Now, we want to find the next jump, the smallest j such that ch(k, j+1) ≠ ch(k, b+1), or equivalently, the largest j such that ch(k, j) = ch(k, b+1). We will make a pseudorandom variable whose value is that j. To get a probabilistic constraint on j, note that for any bucket number, i, we have j ≥ i if and only if the consistent hash hasn’t changed by i, that is, if and only if ch(k, i) = ch(k, b+1). Hence, the distribution of j must satisfy P(j ≥ i) = P( ch(k, i) = ch(k, b+1) ). Fortunately, it is easy to compute that probability. Notice that since P( ch(k, 10) = ch(k, 11) ) is 10/11, and P( ch(k, 11) = ch(k, 12) ) is 11/12, then P( ch(k, 10) = ch(k, 12) ) is 10/11 * 11/12 = 10/12. In general, if n ≥ m, P( ch(k, n) = ch(k, m) ) = m / n. Thus for any i > b, P(j ≥ i) = P( ch(k, i) = ch(k, b+1) ) = (b+1) / i. Now, we generate a pseudorandom variable, r, (depending on k and j) that is uniformly distributed between 0 and 1. Since we want P(j ≥ i) = (b+1) / i, we set j ≥ i iff r ≤ (b+1) / i. Solving the inequality for i yields j ≥ i iff i ≤ (b+1) / r. Since i is a lower bound on j, j will equal the largest i for which j ≥ i, thus the largest i satisfying i ≤ (b+1) / r. Thus, by the definition of the floor function, j = floor((b+1) / r).
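The "In code" figure above is elided; here is a Go sketch of the linear-time version, using math/rand in place of the paper's unspecified per-key PRNG:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // chLinear walks buckets 1..numBuckets-1, jumping to bucket j with
    // probability 1/(j+1), using a PRNG seeded with the key.
    func chLinear(key uint64, numBuckets int32) int32 {
        rng := rand.New(rand.NewSource(int64(key)))
        var b int32 // ch(key, 1) is always 0
        for j := int32(1); j < numBuckets; j++ {
            if rng.Float64() < 1.0/float64(j+1) {
                b = j
            }
        }
        return b
    }

    func main() {
        fmt.Println(chLinear(42, 10))
    }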
Using this formula, jump consistent hash finds ch(key, num_buckets) by choosing successive jump destinations until it finds a position at or past num_buckets. It then knows that the previous jump destination is the answer. To turn this into the actual code of figure 1, we need to implement random. We want it to be fast, and yet also to have well distributed successive values. We use a 64-bit linear congruential generator; the particular multiplier we use produces random numbers that are especially well distributed in higher dimensions (i.e., when successive random values are used to form tuples). We use the key as the seed. (For keys that don’t fit into 64 bits, a 64-bit hash of the key should be used.) The congruential generator updates the seed on each iteration, and the code derives a double from the current seed. Tests show that this generator has good speed and distribution. It is worth noting that unlike the algorithm of Karger et al., jump consistent hash does not require the key to be hashed if it is already an integer. This is because jump consistent hash has an embedded pseudorandom number generator that essentially rehashes the key on every iteration. The hash is not especially good (i.e., linear congruential), but since it is applied repeatedly, additional hashing of the input key is not necessary. [1] http://arxiv.org/pdf/1406.2294v1.pdf
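For reference, a direct Go transcription of the logarithmic-time reference implementation [1] (the package's exported name and signature may differ):

    package main

    import "fmt"

    // jumpHash computes ch(key, numBuckets) in O(log numBuckets)
    // expected time. The 64-bit LCG rehashes the key each iteration,
    // and each step computes the next jump destination directly via
    // j = floor((b+1)/r), with r derived from the top bits of the seed.
    func jumpHash(key uint64, numBuckets int32) int32 {
        var b int64 = -1
        var j int64
        for j < int64(numBuckets) {
            b = j
            key = key*2862933555777941757 + 1
            j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
        }
        return int32(b)
    }

    func main() {
        fmt.Println(jumpHash(42, 10))
    }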
Package XGB provides the X Go Binding, which is a low-level API to communicate with the core X protocol and many of the X extensions. It is *very* closely modeled on XCB, so that experience with XCB (or xpyb) is easily translatable to XGB. That is, it uses the same cookie/reply model and is thread safe. There are otherwise no major differences (in the API). Most uses of XGB typically fall under the realm of window manager and GUI kit development, but other applications (like pagers, panels, tilers, etc.) may also require XGB. Moreover, it is a near certainty that if you need to work with X, xgbutil will be of great use to you as well: https://github.com/jezek/xgbutil This is an extremely terse example that demonstrates how to connect to X, create a window, listen to StructureNotify events and Key{Press,Release} events, map the window, and print out all events received. An example with accompanying documentation can be found in examples/create-window. This is another small example that shows how to query Xinerama for geometry information of each active head. Accompanying documentation for this example can be found in examples/xinerama. XGB can benefit greatly from parallelism due to its concurrent design. For evidence of this claim, please see the benchmarks in xproto/xproto_test.go. xproto/xproto_test.go contains a number of contrived tests that stress particular corners of XGB that I presume could be problem areas. Namely: requests with no replies, requests with replies, checked errors, unchecked errors, sequence number wrapping, cookie buffer flushing (i.e., forcing a round trip every N requests made that don't have a reply), getting/setting properties and creating a window and listening to StructureNotify events. Both XCB and xpyb use the same Python module (xcbgen) for a code generator. XGB (before this fork) used the same code generator as well, but in my attempt to add support for more extensions, I found the code generator extremely difficult to work with. Therefore, I re-wrote the code generator in Go. It can be found in its own sub-package, xgbgen, of xgb. My design of xgbgen includes a rough consideration that it could be used for other languages. I am reasonably confident that the core X protocol is in full working form. I've also tested the Xinerama and RandR extensions sparingly. Many of the other existing extensions have Go source generated (and are compilable) and are included in this package, but I am currently unsure of their status. They *should* work. XKB is the only extension that intentionally does not work, although I suspect that GLX also does not work (however, there is Go source code for GLX that compiles, unlike XKB). I don't currently have any intention of getting XKB working, due to its complexity and my current mental incapacity to test it.
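A small sketch of the cookie/reply model, assuming the github.com/jezek/xgb import paths:

    package main

    import (
        "fmt"

        "github.com/jezek/xgb"
        "github.com/jezek/xgb/xproto"
    )

    func main() {
        conn, err := xgb.NewConn() // connect to the display named by $DISPLAY
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // InternAtom sends the request and returns a cookie immediately;
        // Reply blocks until the server's answer arrives.
        name := "WM_NAME"
        cookie := xproto.InternAtom(conn, false, uint16(len(name)), name)
        reply, err := cookie.Reply()
        if err != nil {
            panic(err)
        }
        fmt.Println("WM_NAME atom:", reply.Atom)
    }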
Package inject provides a reflect based injector. A large application built with dependency injection in mind will typically involve the boring work of setting up the object graph. This library attempts to take care of this boring work by creating and connecting the various objects. Its use involves you seeding the object graph with some (possibly incomplete) objects, where the underlying types have been tagged for injection. Given this, the library will populate the objects, creating new ones as necessary. It uses singletons by default, and supports optional private instances as well as named instances. It works using Go's reflection package and is inherently limited in what it can do as opposed to a code-gen system with respect to private fields. The usage pattern for the library involves struct tags. It requires the tag format used by the various standard libraries, like json, xml, etc. It involves tags in one of the three forms below: The first, no-value syntax is for the common case of a singleton dependency of the associated type. The second triggers creation of a private instance for the associated type. Finally, the last form is asking for a named dependency called "dev logger".
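The three tag forms are elided above; a sketch of how they are used, assuming the facebookgo/inject Graph API (Provide and Populate):

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/facebookgo/inject"
    )

    type Cache struct{}

    type App struct {
        Log    *log.Logger `inject:""`           // singleton of the associated type
        Cache  *Cache      `inject:"private"`    // a private instance, not shared
        DevLog *log.Logger `inject:"dev logger"` // named dependency
    }

    func main() {
        var a App
        var g inject.Graph
        std := log.New(os.Stdout, "", 0)
        dev := log.New(os.Stderr, "dev ", 0)
        if err := g.Provide(
            &inject.Object{Value: &a},
            &inject.Object{Value: std},
            &inject.Object{Value: dev, Name: "dev logger"},
        ); err != nil {
            panic(err)
        }
        if err := g.Populate(); err != nil {
            panic(err)
        }
        fmt.Println(a.Log == std, a.Cache != nil) // true true
    }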
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec as "@every <duration>", where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package, http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking:
Package gorilla/http is a high level HTTP client. This package provides high level convenience methods for common http operations, plus a high level HTTP client implementation. These high level functions are expected to change. Your feedback on their form and utility is warmly requested. Please raise issues at https://github.com/gorilla/http/issues. For lower level http implementations, see gorilla/http/client.
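A sketch of one of the convenience methods; the Get(io.Writer, string) signature is an assumption taken from the package's documentation and should be verified there:

    package main

    import (
        "os"

        "github.com/gorilla/http"
    )

    func main() {
        // Get issues a GET request and copies the response body to the
        // supplied io.Writer, returning the number of bytes written.
        if _, err := http.Get(os.Stdout, "http://example.com/"); err != nil {
            panic(err)
        }
    }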
Package ql implements a pure Go embedded SQL database engine. Builder results available at QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2020-12-10: The database/sql driver now supports the url parameter removeemptywal=N which has the same semantics as passing RemoveEmptyWAL = N != 0 to OpenFile options. 2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add IsDuplicateUniqueIndexError function. 2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases set the FileFormat field in *Options passed to OpenFile to value 2 or use the driver named "ql2" instead of "ql". - Both the old and new driver will properly open and use, read and write the old (V1) or the new (V2) file format of an existing database. - V1 format has a record size limit of ~64 kB. V2 format record size limit is math.MaxInt32. - V1 format uncommitted transaction size is limited by memory resources. V2 format uncommitted transaction is limited by free disk space. - A direct consequence of the previous is that small transactions perform better using V1 format and big transactions perform better using V2 format. - V2 format uses substantially less memory. 2018-08-02: Release v1.2.0 adds initial support for Go modules. 2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom. 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for equality operation. 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb). 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file. 2016-04-03: Release v1.0.3 fixes a data race. 2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock. 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when having a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && have now alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards". 2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built in functions max and min now accept also time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF) Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL keywords or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes. «TRANSACTION» «duration» «lovely stories» No identifiers are predeclared, however note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for meta data virtual tables names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. 
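As a brief illustration of the QL parameter mechanism described above, a sketch using the Go API (ql.OpenMem and the FirstRow signature are assumptions based on the package's examples; FirstRow itself is part of the Recordset interface mentioned in the changelog):

    package main

    import (
        "fmt"

        "modernc.org/ql" // historically github.com/cznic/ql
    )

    func main() {
        db, err := ql.OpenMem() // in-memory database; OpenFile is the on-disk variant
        if err != nil {
            panic(err)
        }

        // $1 (or, equivalently, ?1) is a QL parameter whose value is
        // bound at execution time from the argument list of DB.Run.
        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE t (c string);
                INSERT INTO t VALUES ($1);
            COMMIT;`, "hello"); err != nil {
            panic(err)
        }

        rs, _, err := db.Run(nil, "SELECT c FROM t;")
        if err != nil {
            panic(err)
        }
        row, err := rs[0].FirstRow()
        if err != nil {
            panic(err)
        }
        fmt.Println(row[0]) // hello
    }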
A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply:

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string s is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only.

	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers

	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers

	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
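Since the truncated-division, remainder, and shift semantics described above mirror Go's integer semantics, the identities can be checked with a plain Go sketch:

	package main

	import "fmt"

	func main() {
		// Truncated division: q = x / y and r = x % y satisfy x == q*y + r,
		// with the quotient truncated towards zero.
		x, y := -7, 2
		q, r := x/y, x%y
		fmt.Println(q, r, x == q*y+r) // -3 -1 true

		// Non-negative dividend, constant power-of-2 divisor: division may
		// be replaced by a right shift, the remainder by a bitwise AND.
		n := 11
		fmt.Println(n/4 == n>>2, n%4 == n&3) // true true

		// x << 1 doubles x; x >> 1 halves x, truncating towards negative
		// infinity (unlike division, which truncates towards zero).
		fmt.Println(-5>>1, -5/2) // -3 -2
	}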
For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on "wrap around". For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as is the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows:

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.
- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).
- Integer values are comparable and ordered, in the usual way. Note that durations are integers.
- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.
- Rational values are comparable and ordered, in the usual way.
- String and Blob values are comparable and ordered, lexically byte-wise.
- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values. Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.
- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.
- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases:

- x has type T.
- x's type and T are both integer or floating point types.
- x's type and T are both complex types.
- x is an integer, except bigint or duration, and T is a string type.
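The overflow rules above match Go's integer semantics, so they can be demonstrated with a plain Go sketch:

	package main

	import (
		"fmt"
		"math"
	)

	func main() {
		// Unsigned operations are computed modulo 2^n; high bits are
		// discarded on overflow and values "wrap around".
		var u uint8 = 250
		u += 10
		fmt.Println(u) // 4, i.e. (250+10) mod 256

		// Signed overflow is deterministic under two's complement, and an
		// evaluator may not assume x < x+1 always holds.
		var x int64 = math.MaxInt64
		fmt.Println(x < x+1) // false: x+1 wraps around to math.MinInt64
	}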
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of "0x" or "0X" selects base 16; the "0" prefix selects base 8, and a "0b" or "0B" prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string.
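The integer-conversion rules (sign extension, truncation towards zero) follow Go's, so rule 1's example can be verified directly with a plain Go sketch:

	package main

	import "fmt"

	func main() {
		// Rule 1: int8(v) keeps the low 8 bits of 0x10F0 (0xF0, i.e. -16),
		// and that signed value is then sign extended to 32 bits.
		v := uint16(0x10F0)
		fmt.Printf("%#x\n", uint32(int8(v))) // 0xfffffff0

		// Rule 2: converting a floating-point number to an integer
		// discards the fraction (truncation towards zero).
		f := -2.9
		fmt.Println(int(f)) // -2
	}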
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables.
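A hedged Go sketch of transaction balancing and index creation; as in the earlier sketch, the OpenMem/NewRWCtx/Run API and the import path are assumptions to adjust for your QL package:

	package main

	import (
		"log"

		"github.com/cznic/ql" // illustrative import path; an assumption
	)

	func main() {
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}

		// Every BEGIN TRANSACTION must be balanced by exactly one COMMIT
		// or ROLLBACK; mutating statements outside a transaction fail.
		ctx := ql.NewRWCtx()
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE IF NOT EXISTS department (
					DepartmentID   int,
					DepartmentName string
				);
				CREATE INDEX IF NOT EXISTS DeptID ON department (DepartmentID);
				CREATE UNIQUE INDEX IF NOT EXISTS DeptName ON department (DepartmentName);
			COMMIT;`,
		); err != nil {
			// A failed statement rolls the transaction back automatically;
			// do not issue an explicit ROLLBACK in that case.
			log.Fatal(err)
		}
	}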
A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows (see the sketch after this list):

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.

4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are a result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing insertion of values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as is the column's type or the value must be NULL. If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS clause turns the insert statement into a no-op in such a case. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis.
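A hedged sketch of the constraint and DEFAULT mechanics, with the same assumed Go API and import path as above; the table and column names are illustrative:

	package main

	import (
		"log"

		"github.com/cznic/ql" // illustrative import path; an assumption
	)

	func main() {
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}

		// NOT NULL is the simple constraint form; the DEFAULT expression
		// is evaluated in step 2 above when the column would otherwise
		// be NULL.
		ctx := ql.NewRWCtx()
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE employee (
					Name  string NOT NULL,
					Hired time   DEFAULT now()
				);
				INSERT INTO employee (Name) VALUES ("Ed");
			COMMIT;`,
		); err != nil {
			log.Fatal(err)
		}
		// Hired was NULL on insert, so the default expression now() was
		// evaluated and stored as the column value.
	}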
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause.
Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. The overall evaluation order of a SELECT statement is (a sketch follows this list):

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table.
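A hedged sketch tying the clause order together, again assuming the OpenMem/NewRWCtx/Run API, the Recordset.Do callback, and an illustrative import path:

	package main

	import (
		"fmt"
		"log"

		"github.com/cznic/ql" // illustrative import path; an assumption
	)

	func main() {
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}

		ctx := ql.NewRWCtx()
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (i int);
				INSERT INTO t VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
			COMMIT;`,
		); err != nil {
			log.Fatal(err)
		}

		// WHERE, then ORDER BY, then OFFSET, then LIMIT (steps 3, 7, 8
		// and 9 above): this yields records 4..8 of the filtered,
		// ordered set, and nothing past record #8 is computed.
		rs, _, err := db.Run(nil, `SELECT i FROM t WHERE i > 0 ORDER BY i LIMIT 5 OFFSET 3;`)
		if err != nil {
			log.Fatal(err)
		}
		if err := rs[0].Do(false, func(data []interface{}) (bool, error) {
			fmt.Println(data) // [4] [5] [6] [7] [8]
			return true, nil
		}); err != nil {
			log.Fatal(err)
		}
	}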
For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to yyyy-mm-dd hh:mm:ss + nsec nanoseconds in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.
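Continuing the hedged sketches above (db is an open QL database from the assumed Go API), the system tables can be queried like ordinary tables; the __Table columns Name and Schema follow from the description above, while the Rows helper remains an assumption:

	// List all tables with the statement that would recreate them, then
	// count the rows of one table; count() is 0, never NULL, on an
	// empty record set.
	rs, _, err := db.Run(nil, `
		SELECT Name, Schema FROM __Table;
		SELECT count() FROM department;`)
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range rs {
		rows, err := r.Rows(-1, 0)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(rows)
	}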
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, Mon Jan 2 15:04:05 -0700 MST 2006, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). The same call thus returns one representation on a machine in the CET time zone, but may return a different one on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using go's number format functions in the `strconv` package. For these functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, Mon Jan 2 15:04:05 -0700 MST 2006, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL. The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
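Since parseTime follows the semantics of Go's time.Parse, the zone-handling advice above can be illustrated with a plain Go sketch:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// A layout with a numeric zone offset round-trips losslessly.
		layout := "2006-01-02 15:04:05 -0700"
		t, err := time.Parse(layout, "2018-05-28 11:50:00 +0200")
		if err != nil {
			panic(err)
		}
		fmt.Println(t.Format(layout)) // 2018-05-28 11:50:00 +0200

		// An unknown zone abbreviation is recorded in a fabricated
		// location with a zero offset: the text round-trips, but the
		// instant may differ from the zone's real offset.
		t2, err := time.Parse("2006-01-02 15:04 MST", "2018-05-28 11:50 XYZ")
		if err != nil {
			panic(err)
		}
		fmt.Println(t2.Format("2006-01-02 15:04 MST")) // 2018-05-28 11:50 XYZ
	}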
The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed. Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation. This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
Package tview implements rich widgets for terminal-based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package implements the following widgets: The package also provides Application which is used to poll the event queue and draw widgets on screen. The following is a very basic example showing a box with the title "Hello, world!" (see the sketch after this section): First, we create a box primitive with a border and a title. Then we create an application, set the box as its root primitive, and run the event loop. The application exits when the application's Stop() function is called or when Ctrl-C is pressed. If we have a primitive which consumes key presses, we call the application's SetFocus() function to redirect all key presses to that primitive. Most primitives then offer ways to install handlers that allow you to react to any actions performed on them. You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used. Throughout this package, colors are specified using the tcell.Color type. Functions such as tcell.GetColor(), tcell.NewHexColor(), and tcell.NewRGBColor() can be used to create colors from W3C color names or RGB values. Almost all strings which are displayed can contain color tags. Color tags are W3C color names or six hexadecimal digits following a hash tag, wrapped in square brackets. Examples: A color tag changes the color of the characters following that color tag. This applies to almost everything from box titles, list text, form item labels, to table cells. In a TextView, this functionality has to be switched on explicitly. See the TextView documentation for more information. Color tags may contain not just the foreground (text) color but also the background color and additional flags. In fact, the full definition of a color tag is as follows: Each of the three fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered color tags.) Colors that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can specify the following flags (some flags may not be supported by your terminal): Examples: In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags. I.e. any character that may be used in color or region tags will be recognized. Examples: You can use the Escape() function to insert brackets automatically where needed. When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style. This package supports Unicode characters including wide characters. Many functions in this package are not thread-safe. For many applications, this may not be an issue: If your code makes changes in response to key events, it will execute in the main goroutine and thus will not cause any race conditions. If you access your primitives from other goroutines, however, you will need to synchronize execution.
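A reconstruction of the basic "Hello, world!" example the text above describes; the upstream import path is assumed, since this build references a tcell fork:

	package main

	import "github.com/rivo/tview" // upstream import path; this build may use a fork

	func main() {
		// A box primitive with a border and a title.
		box := tview.NewBox().
			SetBorder(true).
			SetTitle("Hello, world!")

		// Create an application, set the box as its root primitive, and
		// run the event loop. Ctrl-C (or a call to Stop()) exits.
		if err := tview.NewApplication().SetRoot(box, true).Run(); err != nil {
			panic(err)
		}
	}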
The easiest way to do this is to call Application.QueueUpdate() or Application.QueueUpdateDraw() (see the function documentation for details): One exception to this is the io.Writer interface implemented by TextView. You can safely write to a TextView from any goroutine. See the TextView documentation for details. You can also call Application.Draw() from any goroutine without having to wrap it in QueueUpdate(). And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use QueueUpdate() as that may lead to deadlocks. All widgets listed above contain the Box type. All of Box's functions are therefore available for all widgets, too. All widgets also implement the Primitive interface. The tview package is based on https://github.com/derailed/tcell. It uses types and constants from that package (e.g. colors and keyboard values). This package does not process mouse input (yet).
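A small fragment showing the QueueUpdateDraw pattern; app (*tview.Application) and textView (*tview.TextView) are hypothetical names assumed to exist in the surrounding program:

	// Somewhere in a background goroutine:
	go func() {
		app.QueueUpdateDraw(func() {
			// This closure runs in the application's event loop, so
			// touching primitives here is safe; a redraw follows.
			textView.SetText("updated from a background goroutine")
		})
	}()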
Package conf provides support for using environment variables and command line arguments for configuration. It is compatible with the GNU extensions to the POSIX recommendations for command-line options. See http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html There are no hard bindings for this package. This package takes a struct value and parses it for both the environment and flags. It supports several tags to customize the flag options. The field name and any parent struct name will be used for the long form of the command name unless the name is overridden. As an example, using this config struct: The following usage information would be output:

	Usage: conf.test [options] [arguments]

	OPTIONS

There is an API called Parse that can process a config struct with environment variable and command line flag overrides. There is also YAML support using the yaml package that is part of this module. There is a WithParse function that takes a slice of bytes containing the YAML, or WithParseReader that takes any concrete value that knows how to Read. Additionally, if the config struct has a field of the slice type conf.Args then it will be populated with any remaining arguments from the command line after flags have been processed. For example a program with a config struct like this: If that program is executed from the command line like this: Then the cfg.Args field will contain the string values ["serve", "http"]. The Args type has a method Num for convenient access to these arguments such as this: You can add a version with a description by adding the Version type to your config type and setting these values at run time for display.
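A hedged sketch of the struct-driven parsing described above. The import path, tag syntax, and the Parse signature are assumptions (they vary between versions of the package); the derived names follow the convention stated in the text, where field and parent struct names form the long option:

	package main

	import (
		"log"
		"os"

		"github.com/ardanlabs/conf" // illustrative import path; an assumption
	)

	type Config struct {
		Web struct {
			Host string `conf:"default:0.0.0.0:3000"`
		}
		Args conf.Args // remaining arguments, e.g. ["serve", "http"]
	}

	func main() {
		var cfg Config
		// v1-style signature, an assumption; with the "APP" namespace the
		// field maps to the APP_WEB_HOST variable and the --web-host flag.
		if err := conf.Parse(os.Args[1:], "APP", &cfg); err != nil {
			log.Fatal(err)
		}
		log.Println(cfg.Web.Host, cfg.Args)
	}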
Package godb is a query builder and struct mapper. godb does not manage relationships like Active Record or Entity Framework; it's not a full-featured ORM. Its goal is to be more productive than manually doing the mapping between Go structs and database tables. godb needs adapters to use databases; some are packaged with godb for: Start with an adapter, and the Open method which returns a godb.DB pointer: There are three ways to execute SQL with godb: Using raw queries you can execute any SQL queries and get the results into a slice of structs (or a single struct) using the automatic mapping. Structs tools look more 'orm-ish' as they take instances of objects or slices to run select, insert, update and delete. Statements tools stand between raw queries and structs tools. They are easier to use than raw queries, but are limited to simpler cases. The statements tools are based on types: Example: The SelectStatement type could also build a query using columns from structs. It facilitates the build of queries returning values from multiple tables (or views). See the struct mapping explanations, in particular the `rel` part. Example: The structs tools are based on types: Examples: Raw queries are executed using the RawSQL type. The query could be a simple hand-written string, or something complex built using SQLBuffer and Conditions. Example: Structs' contents are mapped to database columns with tags, like in the previous example with the Book struct. The tag is 'db' and its content is: For an autoincrement identifier simply use both 'key' and 'auto'. Example: More than one field could have the 'key' keyword, but with most database drivers none of them could have the 'auto' keyword, because executing an insert query only returns one value: the last inserted id: https://golang.org/pkg/database/sql/driver/#RowsAffected.LastInsertId . With PostgreSQL you can have multiple fields with 'key' and 'auto' options. Structs could be nested. A nested struct is mapped only if it has the 'db' tag. The tag value is a columns prefix applied to all fields columns of the struct. The prefix is not mandatory, a blank string is allowed (no prefix). A nested struct could also have an optional `rel` attribute of the form `rel=relationname`. It's useful to build a select query using multiple relations (tables, views, ...). See the example using the BooksWithInventories type. Example Databases columns are: The mapping is managed by the 'dbreflect' subpackage. Normally its direct use is not necessary, except in one case: some structs are scannable and have to be considered like fields, and mapped to database columns. Common cases are time.Time, or sql.NullString, ... You can register a custom struct with `RegisterScannableStruct` and a struct instance; for example the time.Time is registered like this: The structs statements use the struct name as the table name. But you can override this simply by implementing a TableName method: Statements and structs tools manage 'where' and 'group by' sql clauses. These conditional clauses are built either with raw sql code, or with the Condition struct like this: WhereQ methods take a Condition instance built by godb.Q. Where methods take raw SQL, but are just syntactic sugar. These calls are equivalent: Multiple calls to Where or WhereQ are allowed; these calls are equivalent: Slices are managed in a particular way: a single placeholder is replaced with multiple ones. This allows code like: The SQLBuffer exists to ease the build of complex raw queries.
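A hedged sketch of the 'db' tag conventions and the Where/WhereQ equivalence described above; the Book struct and its column names are illustrative, not part of the package:

	// 'key' plus 'auto' marks an autoincrement identifier.
	type Book struct {
		Id     int    `db:"id,key,auto"`
		Title  string `db:"title"`
		Author string `db:"author"`
	}

	// Where(...) is syntactic sugar for WhereQ(godb.Q(...)); both forms
	// below build the same conditional clause.
	func booksByAuthor(db *godb.DB, author string) ([]Book, error) {
		var books []Book
		err := db.Select(&books).
			WhereQ(godb.Q("author = ?", author)).
			Do()
		return books, err
	}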
It's also used internally by godb. Its use and purpose are simple: concatenate SQL parts (accompanied by their arguments) in an efficient way. Example: For all databases, structs updates and deletes manage optimistic locking when a dedicated integer column is present. Simply tag it with `oplock`: When an update or delete operation fails, Do() returns the `ErrOpLock` error. With PostgreSQL and SQL Server, godb manages optimistic locking with automatic fields. Just add a dedicated field in the struct and tag it with `auto,oplock`. With PostgreSQL you can use the `xmin` system column like this: For more information about `xmin` see https://www.postgresql.org/docs/10/static/ddl-system-columns.html With SQL Server you can use a `rowversion` field with the `mssql.Rowversion` type like this: For more information about the `rowversion` data type see https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql godb keeps track of the time consumed while executing queries. You can reset it and get the time consumed since Open or the previous reset: You can log all executed queries and details of the consumed time. Simply add a logger: godb takes advantage of the PostgreSQL RETURNING clause, and the SQL Server OUTPUT clause. With statements tools you have to add a RETURNING clause with the Suffix method and call the DoWithReturning method instead of Do(). It's optional. With StructInsert it's transparent: the RETURNING or OUTPUT clause is added for all 'auto' columns and it's managed for you. One of the big advantages is with BulkInsert: for other databases the rows are inserted but the new keys are unknown. With PostgreSQL and SQL Server the slice is updated for all inserted rows. It also enables optimistic locking with *automatic* columns. godb has two prepared statement caches, one to use during transactions, and one to use outside of a transaction. Both use an LRU algorithm. The transaction cache is enabled by default, but not the other. A transaction (sql.Tx) isn't shared between goroutines, so using a prepared statement with it has a predictable behavior. But without a transaction a prepared statement could have to be re-prepared on a different connection if needed, leading to unpredictable performance in high concurrency scenarios. Enabling the non-transaction cache could improve performance with single goroutine batches. With multiple goroutines accessing the same database: it depends! A benchmark would be wise. Using statements tools and structs tools you can execute select queries and get an iterator instead of filling a slice of struct instances. This could be useful if the request's result is big and you don't want to allocate too much memory. On the other hand you will write almost as much code as with the `sql` package, but with automatic struct mapping and a request builder. Iterators are also available with raw queries. In this case you can execute any kind of SQL code, not just select queries. To get an iterator simply use the `DoWithIterator` method instead of `Do`. The iterator usage is similar to the standard `sql.Rows` type. Don't forget to check that there are no errors with the `Err` method, and don't forget to call `Close` when the iterator is no longer useful, especially if you don't scan all the resultset. To avoid performance cost godb.DB does not implement synchronization. So a given instance of godb.DB should not be used by multiple goroutines. But a godb.DB instance can be created and used as a blueprint and cloned for each goroutine. See the Clone and Clear methods.
A typical use case is a web server. When the application starts a godb.DB is created; it is cloned in each HTTP handler with Clone, and resources are freed by calling Clear (use a defer statement).
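A hedged sketch of that blueprint pattern, reusing the hypothetical Book struct from the earlier sketch and assuming the standard net/http package around it:

	// base is created once at startup and used only as a blueprint.
	var base *godb.DB

	func booksHandler(w http.ResponseWriter, r *http.Request) {
		db := base.Clone() // per-request (per-goroutine) copy
		defer db.Clear()   // free the clone's resources on return

		var books []Book
		if err := db.Select(&books).Do(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "%d books\n", len(books))
	}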
ltcd is a full-node Litecoin implementation written in Go. The default options are sane for most users. This means ltcd will work 'out of the box' for most users. However, there are also a wide variety of flags that can be used to control it. The following section provides a usage overview which enumerates the flags. An interesting point to note is that the long form of all of these options (except -C) can be specified in a configuration file that is automatically parsed when ltcd starts up. By default, the configuration file is located at ~/.ltcd/ltcd.conf on POSIX-style operating systems and %LOCALAPPDATA%\ltcd\ltcd.conf on Windows. The -C (--configfile) flag, as shown below, can be used to override this location. Usage: Application Options: Help Options: