Package sessions provides sessions support for net/http and valyala/fasthttp with automatic GC. You can register an unlimited number of databases to load and update/save sessions to an external server or to an external (SQL and/or NoSQL) database.

Usage with net/http:

	// init a new sessions manager (if you use only one web framework inside your app,
	// you can use the package-level functions sessions.Start/sessions.Destroy instead)
	manager := sessions.New(sessions.Config{})

	// start a session for a particular client
	manager.Start(http.ResponseWriter, *http.Request)

	// destroy a session from the server and the client
	manager.Destroy(http.ResponseWriter, *http.Request)

Usage with valyala/fasthttp:

	// init a new sessions manager (if you use only one web framework inside your app,
	// you can use the package-level functions sessions.Start/sessions.Destroy instead)
	manager := sessions.New(sessions.Config{})

	// start a session for a particular client
	manager.StartFasthttp(*fasthttp.RequestCtx)

	// destroy a session from the server and the client
	manager.DestroyFasthttp(*fasthttp.RequestCtx)

Note that you can use both fasthttp and net/http within the same sessions manager (.New) instance, so you can share sessions between a net/http app and a valyala/fasthttp app.
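A minimal net/http sketch of the calls above. The import path and the Set/Get accessors on the value returned by Start are assumptions based on the project README; treat them as illustrative.

	package main

	import (
		"fmt"
		"net/http"

		"github.com/kataras/go-sessions" // import path assumed
	)

	func main() {
		manager := sessions.New(sessions.Config{})

		http.HandleFunc("/set", func(w http.ResponseWriter, r *http.Request) {
			s := manager.Start(w, r) // start (or resume) the session for this client
			s.Set("name", "quixote") // Set/Get are assumed accessors on the returned session
			fmt.Fprintln(w, "session value stored")
		})

		http.HandleFunc("/get", func(w http.ResponseWriter, r *http.Request) {
			s := manager.Start(w, r)
			fmt.Fprintln(w, "name =", s.Get("name"))
		})

		http.HandleFunc("/destroy", func(w http.ResponseWriter, r *http.Request) {
			manager.Destroy(w, r) // remove the session from both server and client
			fmt.Fprintln(w, "session destroyed")
		})

		http.ListenAndServe(":8080", nil)
	}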
kms is a package that provides key management system features for go-kms-wrapping Wrappers. The following domain terms are key to understanding the system and how to use it:

wrapper: all keys within the system are a Wrapper from go-kms-wrapping.

root external wrapper: the external wrapper that serves as the root of trust for the kms system. Typically you'd get this root wrapper via go-kms-wrapping from a KMS provider. See the go-kms-wrapping docs for further details.

scope: a scope defines a rotational boundary for a set of keys. The system tracks only the scope identifier, which is used to find keys within a specific scope. **IMPORTANT**: You should define a FK from kms_root_key.scope_id, with cascading deletes and updates, to the PK of the table within your domain that tracks scopes. This FK will prevent orphaned kms keys. For example, you could assign organizations and projects scope IDs and then associate a set of keys with each org and project within your domain.

root key: the KEKs (keys to encrypt keys) of the system.

data key: the DEKs (keys to encrypt data) of the system; each must have a parent root key and a purpose.

purpose: each data key (DEK) must have exactly one purpose. For example, you could define a purpose of client-secrets for a DEK that will be used for encrypt/decrypt wrapper operations on `client-secrets`.

You'll find the database schema within the migrations directory. Currently postgres and sqlite are supported. The implementation does make some use of triggers to ensure some of its data integrity. The migrations are intended to be incorporated into your existing go-migrate migrations. Feel free to change the migration file names, as long as they are applied in the same order as defined here. FYI, the migrations include a `kms_version` table which is used to ensure that the schema and module are compatible.
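As a sketch of the recommended foreign key, applied from your own migration code: kms_root_key.scope_id comes from the package schema, while the domain scope table iam_scope and its PK public_id are hypothetical.

	package migrations

	import (
		"context"
		"database/sql"
	)

	// Hypothetical domain scope table: iam_scope(public_id). Constraining
	// kms_root_key.scope_id means deleting a scope also removes its keys.
	const addKmsScopeFK = `
	ALTER TABLE kms_root_key
	    ADD CONSTRAINT kms_root_key_scope_id_fkey
	    FOREIGN KEY (scope_id)
	    REFERENCES iam_scope (public_id)
	    ON DELETE CASCADE
	    ON UPDATE CASCADE;
	`

	// AddKmsScopeFK applies the constraint; run it after the kms migrations.
	func AddKmsScopeFK(ctx context.Context, db *sql.DB) error {
		_, err := db.ExecContext(ctx, addKmsScopeFK)
		return err
	}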
Package edgedb is the official Go driver for EdgeDB. Additionally, github.com/edgedb/edgedb-go/cmd/edgeql-go is a code generator that generates go functions from edgeql files. Typical client usage looks like this: We recommend using environment variables for connection parameters. See the client connection docs for more information. You may also connect to a database using a DSN: Or you can use Option fields. edgedb never returns underlying errors directly. If you are checking for things like context expiration use errors.Is() or errors.As(). Most errors returned by the edgedb package will satisfy the edgedb.Error interface, which has methods for introspection. The following list shows the marshal/unmarshal mapping between EdgeDB types and go types: Note that EdgeDB's std::duration type is represented as int64 microseconds while go's time.Duration type is int64 nanoseconds. It is incorrect to cast one directly to the other. Shape fields that are not required must use optional types for receiving query results. The edgedb.Optional struct can be embedded to make structs optional. Not all types listed above are valid query parameters. To pass a slice of scalar values use array in your query. EdgeDB doesn't currently support using sets as parameters. Nested structures are also not directly allowed but you can use json instead. By default EdgeDB will ignore embedded structs when marshaling/unmarshaling. To treat an embedded struct's fields as part of the parent struct's fields, tag the embedded struct with `edgedb:"$inline"`. Interfaces for user defined marshalers/unmarshalers are documented in the internal/marshal package. Link properties are treated as fields in the linked-to struct, and the @ is omitted from the field's tag.
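A minimal sketch of typical client usage (the query is illustrative; connection parameters are taken from the environment as recommended above):

	package main

	import (
		"context"
		"log"

		"github.com/edgedb/edgedb-go"
	)

	func main() {
		ctx := context.Background()

		// Connection parameters come from the environment / project config.
		client, err := edgedb.CreateClient(ctx, edgedb.Options{})
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		var answer int64
		// QuerySingle scans a single result into the out pointer.
		if err := client.QuerySingle(ctx, "SELECT 2 + 2", &answer); err != nil {
			log.Fatal(err)
		}
		log.Println("the answer is", answer)
	}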
Package gomusicbrainz implements a MusicBrainz WS2 client library. MusicBrainz WS2 (Version 2 of the XML Web Service) supports three different kinds of requests: search, lookup and browse. With search requests you can search MusicBrainz's database for all entities. GoMusicBrainz implements one search method for every search request in the form: searchTerm follows the Apache Lucene syntax and can either contain multiple fields with logical operators or just a simple search string. Please refer to https://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package_description for more details on the Lucene syntax. limit defines how many entries should be returned (1-100, default 25). offset is used for paging through more than one page of results. To ignore limit and/or offset, set it to -1. You can perform a lookup of an entity when you have the MBID for that entity. GoMusicBrainz provides two ways to perform lookup requests: either the specific lookup method that is implemented for each entity that has a lookup endpoint, or the common lookup method if you already have an entity (with MBID) that implements the MBLookupEntity interface: With both methods you can include inc params which affect subqueries, e.g. relationships; see http://musicbrainz.org/doc/Development/XML_Web_Service/Version_2#inc.3D_arguments_which_affect_subqueries. Not all of them are supported yet.
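A sketch of a search request, based on the project README's client setup; the response fields used below (Artists, Name) are assumptions:

	package main

	import (
		"fmt"
		"log"

		"github.com/michiwend/gomusicbrainz"
	)

	func main() {
		// The client identifies your application to the MusicBrainz servers.
		client, err := gomusicbrainz.NewWS2Client(
			"https://musicbrainz.org/ws/2",
			"my-example-app",
			"0.0.1",
			"https://example.com/contact",
		)
		if err != nil {
			log.Fatal(err)
		}

		// Lucene syntax: search for artists named "Parov Stelar"; use default limit/offset.
		resp, err := client.SearchArtist(`artist:"Parov Stelar"`, -1, -1)
		if err != nil {
			log.Fatal(err)
		}
		for _, artist := range resp.Artists {
			fmt.Println(artist.Name)
		}
	}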
Package ora implements an Oracle database driver. An Oracle database may be accessed through the database/sql package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases. The ora package has been verified to work with: Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as: The ora package has no external Go dependencies and is available on GitHub and gopkg.in: The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported. Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type. A common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The ora package associates a Go bool value with a Go rune and sends and receives the rune to a CHAR(1 BYTE) column or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package but not with the database/sql package. Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver e.g., placeholder names :c1, :1, or :xyz are treated equally. You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straightforward. The ora package implements interfaces in the database/sql/driver package enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetDrvCfg to set the used configuration options including statement configuration and Rset configuration.
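A minimal database/sql sketch using the registered "ora" driver. The DSN format shown (user/password@host:port/service) and the import path are assumptions; consult the driver's connection documentation for your environment.

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		_ "gopkg.in/rana/ora.v4" // registers the "ora" driver; import path assumed
	)

	func main() {
		// DSN format is an assumption; adjust for your Oracle instance.
		db, err := sql.Open("ora", "scott/tiger@localhost:1521/orclpdb1")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		var today string
		// Placeholders are Oracle-style identifiers prefixed with a colon, bound by position.
		err = db.QueryRow("SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM dual WHERE 1 = :1", 1).Scan(&today)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("server date:", today)
	}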
When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different than database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly: Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The Stmt.Prep method is variadic accepting zero or more GoColumnType which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32. For example, you may insert a uint16 and select numerics of various sizes: If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used. EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset. 
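A sketch of direct ora usage following the Env, Srv, Ses, Stmt hierarchy described above. Exact Open* signatures and configuration field names (Dblink, Username, Password) vary between package versions and are assumptions here.

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/rana/ora.v4" // import path assumed
	)

	func main() {
		env, err := ora.OpenEnv()
		if err != nil {
			log.Fatal(err)
		}
		defer env.Close()

		srv, err := env.OpenSrv(&ora.SrvCfg{Dblink: "orcl"}) // SrvCfg field name assumed
		if err != nil {
			log.Fatal(err)
		}
		defer srv.Close()

		ses, err := srv.OpenSes(&ora.SesCfg{Username: "scott", Password: "tiger"}) // SesCfg field names assumed
		if err != nil {
			log.Fatal(err)
		}
		defer ses.Close()

		stmt, err := ses.Prep("SELECT ename FROM emp WHERE deptno = :1")
		if err != nil {
			log.Fatal(err)
		}
		defer stmt.Close()

		rset, err := stmt.Qry(10)
		if err != nil {
			log.Fatal(err)
		}
		for rset.Next() {
			fmt.Println(rset.Row[0]) // Rset.Row holds the current row's values
		}
		if rset.Err != nil {
			log.Fatal(rset.Err)
		}
	}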
Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values: Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type. Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index and method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement. Rset has two usages. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally. Rset does not have an Open method or Close method. IntervalYM may be inserted and selected: IntervalDS may be inserted and selected: Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started: Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a sql INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a sql UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns.
As table columns are added and dropped over the lifetime of a table Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a sql SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table Ses.Sel is easy to read and revise. The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine: The Srv.Version method is available to obtain the Oracle server version. A call to Version requires an open Ses: Further code examples are available in the example file, test files and samples folder. The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging, or use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package. Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests. database/sql method Stmt.QueryRow is not supported. Copyright 2015 Rana Ian. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
Package ravendb implements a driver for the RavenDB NoSQL document database. For more documentation see https://github.com/ravendb/ravendb-go-client/blob/master/readme.md
Package migration automatically handles versioning of a database schema by applying a series of migrations supplied by the client. It uses features only from the database/sql package, so it tries to be driver independent. However, to track the version of the database, it is necessary to execute some SQL. I've made an effort to keep those queries simple, but if they don't work with your database, you may override them. This package works by applying a series of migrations to a database. Once a migration is created, it should never be changed. Every time a database is opened with this package, all necessary migrations are executed in a single transaction. If any part of the process fails, an error is returned and the transaction is rolled back so that the database is left untouched. (Note that for this to be useful, you'll need to use a database that supports rolling back changes to your schema. Notably, MySQL does not support this, although SQLite and PostgreSQL do.) The version of a database is defined as the number of migrations applied to it.
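The package's own API is not shown here; as a conceptual sketch of the mechanism just described (a version table plus migrations applied in one transaction), using only database/sql:

	package migration

	import (
		"database/sql"
		"fmt"
	)

	// migrations[i] is the SQL for schema version i+1; once published, never change it.
	var migrations = []string{
		`CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)`,
		`ALTER TABLE people ADD COLUMN email TEXT`,
	}

	// migrate applies all pending migrations inside a single transaction and records
	// the new version; on any error the whole transaction is rolled back.
	func migrate(db *sql.DB) error {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		defer tx.Rollback() // no-op after a successful Commit

		if _, err := tx.Exec(`CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)`); err != nil {
			return err
		}
		var version int
		if err := tx.QueryRow(`SELECT COALESCE(MAX(version), 0) FROM schema_version`).Scan(&version); err != nil {
			return err
		}
		for i := version; i < len(migrations); i++ {
			if _, err := tx.Exec(migrations[i]); err != nil {
				return fmt.Errorf("migration %d: %w", i+1, err)
			}
		}
		// Placeholder syntax depends on your driver ($1 for PostgreSQL, ? for SQLite).
		if _, err := tx.Exec(`DELETE FROM schema_version`); err != nil {
			return err
		}
		if _, err := tx.Exec(`INSERT INTO schema_version (version) VALUES ($1)`, len(migrations)); err != nil {
			return err
		}
		return tx.Commit()
	}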
Package dbsql implements the Go driver for Databricks SQL. Clients should use the database/sql package in conjunction with the driver: Use sql.Open() to create a database handle via a data source name string: The DSN format is: Supported optional connection parameters can be specified in param=value and include: Supported optional session parameters can be specified in param=value and include: Use sql.OpenDB() to create a database handle via a new connector object created with dbsql.NewConnector(): Supported functional options include: Cancelling a query via context cancellation or timeout is supported. Use the driverctx package under driverctx/ctx.go to add CorrelationId and ConnId to the context. CorrelationId and ConnId make it convenient to parse and create metrics in logging. **Connection Id** Internal id to track what happens under a connection. Connections can be reused so this would track across queries. **Query Id** Internal id to track what happens under a query. Useful because the same query can be used with multiple connections. **Correlation Id** External id, such as request ID, to track what happens under a request. Useful to track multiple connections in the same request. Use the logger package under logger.go to set up logging (from zerolog). By default, logging level is `warn`. If you want to disable logging, use `disabled`. The user can also utilize Track() and Duration() to custom log the elapsed time of anything tracked. The result log may look like this: Use the driverctx package under driverctx/ctx.go to add callbacks to the query context to receive the connection id and query id. Passing parameters to a query is supported when run against servers with version DBR 14.1. For complex types, you can specify the SQL type using the dbsql.Parameter type field. If this field is set, the value field MUST be set to a string. Please note that named and positional parameters cannot be used together in the same query. The Go driver now supports staging operations. In order to use a staging operation, you first must update the context with a list of folders that you are allowing the driver to access. After doing so, you can execute staging operations using this context with the exec context. There are three error types exposed via dbsql/errors. Each type has a corresponding sentinel value which can be used with errors.Is() to determine if one of the types is present in an error chain. Example usage: See the documentation for dbsql/errors for more information. The driver supports the ability to retrieve Apache Arrow record batches. To work with record batches it is necessary to use sql.Conn.Raw() to access the underlying driver connection to retrieve a driver.Rows instance. The driver exposes two public interfaces for working with record batches from the rows sub-package: The driver.Rows instance retrieved using Conn.Raw() can be converted to a Databricks Rows instance via a type assertion, then use GetArrowBatches() to retrieve a batch iterator. If the ArrowBatchIterator is not closed it will leak resources, such as the underlying connection. Calling code must call Release() on records returned by DBSQLArrowBatchIterator.Next().
Example usage:

	==========================================
	Databricks Type         --> Golang Type
	==========================================
	BOOLEAN                 --> bool
	TINYINT                 --> int8
	SMALLINT                --> int16
	INT                     --> int32
	BIGINT                  --> int64
	FLOAT                   --> float32
	DOUBLE                  --> float64
	VOID                    --> nil
	STRING                  --> string
	DATE                    --> time.Time
	TIMESTAMP               --> time.Time
	DECIMAL(p,s)            --> sql.RawBytes
	BINARY                  --> sql.RawBytes
	ARRAY<elementType>      --> sql.RawBytes
	STRUCT                  --> sql.RawBytes
	MAP<keyType, valueType> --> sql.RawBytes
	INTERVAL (year-month)   --> string
	INTERVAL (day-time)     --> string

For ARRAY, STRUCT, and MAP types, sql.Scan can cast sql.RawBytes to JSON string, which can be unmarshalled to Golang arrays, maps, and structs. For example: May generate the following row:
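A hedged sketch of that JSON round-trip; the query, table, and column names are hypothetical, and only database/sql and encoding/json are used:

	package example

	import (
		"database/sql"
		"encoding/json"
		"fmt"
	)

	// scanArrayColumn reads a hypothetical ARRAY<INT> column as sql.RawBytes and
	// unmarshals the JSON text into a Go slice.
	func scanArrayColumn(db *sql.DB) error {
		rows, err := db.Query("SELECT array_col FROM hypothetical_table")
		if err != nil {
			return err
		}
		defer rows.Close()

		for rows.Next() {
			var raw sql.RawBytes
			if err := rows.Scan(&raw); err != nil {
				return err
			}
			var values []int
			if err := json.Unmarshal(raw, &values); err != nil {
				return err
			}
			fmt.Println(values) // e.g. [1 2 3]
		}
		return rows.Err()
	}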
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address then the driver may connect multiple times to the same host, and will not mark the node as being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, the SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
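A minimal end-to-end sketch of the pieces described above (cluster configuration, session creation, and a couple of simple queries); the addresses, keyspace, and table are illustrative:

	package main

	import (
		"log"

		"github.com/gocql/gocql"
	)

	func main() {
		cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
		cluster.Keyspace = "example"
		cluster.Consistency = gocql.Quorum

		session, err := cluster.CreateSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// insert a row
		if err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
			"me", gocql.TimeUUID(), "hello world").Exec(); err != nil {
			log.Fatal(err)
		}

		// read a single row
		var text string
		if err := session.Query(`SELECT text FROM tweet WHERE timeline = ? LIMIT 1`,
			"me").Scan(&text); err != nil {
			log.Fatal(err)
		}
		log.Println("tweet:", text)
	}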
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish for example between a text column being null and an empty string, you can unmarshal into a *string variable instead of string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate yourself as this could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low values of PageSize will negatively affect performance, a value below 100 is probably too low.
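A sketch of the manual-paging pattern just described (the table and page size are illustrative):

	package paging

	import "github.com/gocql/gocql"

	// fetchPage returns one page of ids plus the state needed to fetch the next
	// page. Pass nil as pageState to get the first page; a zero-length returned
	// state means there are no more pages.
	func fetchPage(session *gocql.Session, pageState []byte) (ids []gocql.UUID, nextPageState []byte, err error) {
		iter := session.Query(`SELECT id FROM tweet WHERE timeline = ?`, "me").
			PageSize(100).
			PageState(pageState).
			Iter()
		nextPageState = iter.PageState()

		scanner := iter.Scanner()
		for scanner.Next() {
			var id gocql.UUID
			if err := scanner.Scan(&id); err != nil {
				return nil, nil, err
			}
			ids = append(ids, id)
		}
		if err := scanner.Err(); err != nil { // Err also closes the iterator
			return nil, nil, err
		}
		return ids, nextPageState, nil
	}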
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity, either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying nor speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. If the query is an LWT and the configured RetryPolicy additionally implements the LWTRetryPolicy interface, then the policy will be cast to LWTRetryPolicy and used this way. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing.
The two parallel executions of the query race to return a result; the first received result will be returned. UDTs can be mapped ((un)marshaled) from/to map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish for example between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
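A sketch of mapping a UDT to a Go struct with cql tags, as described above; the type, table, and field names are illustrative:

	package places

	import "github.com/gocql/gocql"

	// Coordinates maps a hypothetical CQL user-defined type:
	//
	//	CREATE TYPE coordinates (lat double, long double);
	type Coordinates struct {
		Lat  float64 `cql:"lat"`
		Long float64 `cql:"long"`
	}

	// saveAndLoad writes and reads the UDT like any other column value.
	func saveAndLoad(session *gocql.Session) (Coordinates, error) {
		in := Coordinates{Lat: 37.42, Long: -122.08}
		if err := session.Query(`INSERT INTO places (name, coords) VALUES (?, ?)`,
			"office", in).Exec(); err != nil {
			return Coordinates{}, err
		}
		var out Coordinates
		err := session.Query(`SELECT coords FROM places WHERE name = ?`, "office").Scan(&out)
		return out, err
	}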
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
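A quick sketch of creating a client and performing a single read (the database path, table, and columns are illustrative):

	package main

	import (
		"context"
		"fmt"
		"log"

		"cloud.google.com/go/spanner"
	)

	func main() {
		ctx := context.Background()

		client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/my-db")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Single-use read-only transaction: read one row by key.
		row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
		if err != nil {
			log.Fatal(err)
		}
		var balance int64
		if err := row.Column(0, &balance); err != nil {
			log.Fatal(err)
		}
		fmt.Println("balance:", balance)
	}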
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
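A sketch of a ReadWriteTransaction that both buffers a mutation and runs a DML statement, following the description above (table and column names are illustrative):

	package accounts

	import (
		"context"

		"cloud.google.com/go/spanner"
	)

	// transfer buffers a mutation and runs a DML statement inside one
	// read-write transaction; the client retries the function on abort.
	func transfer(ctx context.Context, client *spanner.Client) error {
		_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
			// Buffer a mutation; it is applied when the transaction commits.
			m := spanner.Update("Accounts", []string{"user", "balance"}, []interface{}{"alice", int64(30)})
			if err := txn.BufferWrite([]*spanner.Mutation{m}); err != nil {
				return err
			}

			// Run a DML statement; Update returns the number of affected rows.
			_, err := txn.Update(ctx, spanner.Statement{
				SQL:    `UPDATE Accounts SET balance = balance - @amount WHERE user = @user`,
				Params: map[string]interface{}{"amount": int64(10), "user": "bob"},
			})
			return err
		})
		return err
	}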
Package cgi implements the common gateway interface (CGI) for Caddy, a modern, full-featured, easy-to-use web server. This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input. The advantage of CGI is that you do not need to fuss with server startup and persistence, long term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options extend easily to your CGI scripts. CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server’s responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script’s dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner. Serving dynamic content exposes your server to more potential threats than serving static pages. There are a number of considerations of which you should be aware when using CGI applications. CGI SCRIPTS SHOULD BE LOCATED OUTSIDE OF CADDY’S DOCUMENT ROOT. Otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server. MISTRUST THE CONTENTS OF PATH_INFO, QUERY_STRING AND STANDARD INPUT. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations. An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Additionally, if the Caddy errors directive is enabled, any content the application writes to its standard error stream will be written to the error log. This can be useful to diagnose problems with the execution of the CGI application. Your CGI application can be executed directly or indirectly. 
In the direct case, the application can be a compiled native executable or it can be a shell script that contains as its first line a shebang that identifies the interpreter to which the file’s name should be passed. Caddy must have permission to execute the application. On Posix systems this will mean making sure the application’s ownership and permission bits are set appropriately; on Windows, this may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl or python. The basic cgi directive lets you associate a single pattern with a particular script. The directive can be repeated any reasonable number of times. Here is the basic syntax: For example: When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy itself. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script. Here is an example of a standalone script, similar to one used in the cgi plugin’s test suite: The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables included that are described below. If you need to pass any special environment variables or allow any environment variables that are part of Caddy’s process to pass to your script, you will need to use the advanced directive syntax described below. The values used for the script name and its arguments are subject to placeholder replacement. In addition to the standard Caddy placeholders such as {method} and {host}, the following placeholder substitutions are made: - {.} is replaced with Caddy’s current working directory - {match} is replaced with the portion of the request that satisfies the match directive - {root} is replaced with Caddy’s specified root directory You can include glob wildcards in your matches. Basically, an asterisk represents a sequence of zero or more non-slash characters and a question mark represents a single non-slash character. These wildcards can be used multiple times in a match expression. See the documentation for path/Match in the Go standard library for more details about glob matching. Here is an example directive: In this case, the cgi middleware will match requests such as https://example.com/report/weekly.lua and https://example.com/report/report.lua/weekly but not https://example.com/report.lua. The use of the asterisk expands to any character sequence within a directory. For example, if the request is made, the following command is executed: Note that the portion of the request that follows the match is not included. That information is conveyed to the script by means of environment variables. In this example, the Lua interpreter is invoked directly from Caddy, so the Lua script does not need the shebang that would be needed in a standalone script. This method facilitates the use of CGI on the Windows platform. In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax. That looks like this: For example, With the advanced syntax, the exec subdirective must appear exactly once. The match subdirective must appear at least once. 
The env, pass_env, empty_env, and except subdirectives can appear any reasonable number of times. The pass_all_env and dir subdirectives may each appear once. The dir subdirective specifies the CGI executable’s working directory. If it is not specified, Caddy’s current working directory is used. The except subdirective uses the same pattern matching logic that is used with the match subdirective except that the request must match a rule fully; no request path prefix matching is performed. Any request that matches a match pattern is then checked with the patterns in except, if any. If any matches are made with the except pattern, the request is rejected and passed along to subsequent handlers. This is a convenient way to have static file resources served properly rather than being confused as CGI applications. The empty_env subdirective is used to pass one or more empty environment variables. Some CGI scripts may expect the server to pass certain empty variables rather than leaving them unset. This subdirective allows you to deal with those situations. The values associated with environment variable keys are all subject to placeholder substitution, just as with the script name and arguments. If your CGI application runs properly at the command line but fails to run from Caddy it is possible that certain environment variables may be missing. For example, the ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications require the COMPUTERNAME variable. The pass_all_env subdirective instructs Caddy to pass each environment variable it knows about to the CGI executable. This addresses a common frustration that is caused when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable. Use this subdirective only with CGI applications that you trust not to leak this information. If you protect your CGI application with the Caddy JWT middleware, your program will have access to the token’s payload claims by means of environment variables. For example, the following token claims will be available with the following environment variables. All values are conveyed as strings, so some conversion may be necessary in your program. No placeholder substitutions are made on these values. If you run into unexpected results with the CGI plugin, you are able to examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin will respond to matching requests with a page that displays variables of interest. In particular, it will show the replacement value of {match} and the environment variables to which your CGI application has access. For example, consider this example CGI block: When you request a matching URL, for example, the Caddy server will deliver a text page similar to the following. The CGI application (in this case, wapptclsh) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective.
In this example, the Caddyfile looks like this: Note that a request for /show gets mapped to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern. The contents of /usr/local/cgi-bin/report/gen are: The purpose of this script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input. In this particular case, posted data gets stored in the variable POST_DATA. Your script may use a different method to read POST content. Secondly, the SCRIPT_EXEC variable is not a CGI standard. It is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed. When a browser requests the response looks like When a client makes a POST request, such as with the following command, the response looks the same except for the following lines: The fossil distributed software management tool is a native executable that supports interaction as a CGI application. In this example, /usr/bin/fossil is the executable and /home/quixote/projects.fossil is the fossil repository. To configure Caddy to serve it, use a cgi directive something like this in your Caddyfile: In your /usr/local/cgi-bin directory, make a file named projects with the following single line: The fossil documentation calls this a command file. When fossil is invoked after a request to /projects, it examines the relevant environment variables and responds as a CGI application. If you protect /projects with basic HTTP authentication, you may wish to enable the ALLOW REMOTE_USER AUTHENTICATION option when setting up fossil. This lets fossil dispense with its own authentication, assuming it has an account for the user. The agedu utility can be used to identify unused files that are taking up space on your storage media. Like fossil, it can be used in different modes including CGI. First, use it from the command line to generate an index of a directory, for example In your Caddyfile, include a directive that references the generated index: You will want to protect the /agedu resource with some sort of access control, for example HTTP Basic Authentication. This small example demonstrates how to write a CGI program in Go; a sketch of such a program appears at the end of this documentation. The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, the following directive in your Caddy file will make it available: The cgit application provides an attractive and useful web interface to git repositories. Here is how to run it with Caddy. After compiling cgit, you can place the executable somewhere out of Caddy’s document root. In this example, it is located in /usr/local/cgi-bin. A sample configuration file is included in the project’s cgitrc.5.txt file. You can use it as a starting point for your configuration. The default location for this file is /etc/cgitrc but in this example the location /home/quixote/caddy/cgitrc is used. Note that changing the location of this file from its default will necessitate the inclusion of the environment variable CGIT_CONFIG in the Caddyfile cgi directive. When you edit the repository stanzas in this file, be sure each repo.path item refers to the .git directory within a working checkout. Here is an example stanza: Also, you will likely want to change cgit’s cache directory from its default in /var/cache (generally accessible only to root) to a location writeable by Caddy.
In this example, cgitrc contains the line You may need to create the cgit subdirectory. There are some static cgit resources (namely, cgit.css, favicon.ico, and cgit.png) that will be accessed from Caddy’s document tree. For this example, these files are placed in a directory named cgit-resource. The following lines are part of the cgitrc file: Additionally, you will likely need to tweak the various file viewer filters such as source-filter and about-filter based on your system. The following Caddyfile directive will allow you to access the cgit application at /cgit: Feeling reckless? You can run PHP in CGI mode. In general, FastCGI is the preferred method to run PHP if your application has many pages or a fair amount of database activity. But for small PHP programs that are seldom used, CGI can work fine. You’ll need the php-cgi interpreter for your platform. This may involve downloading the executable or downloading and then compiling the source code. For this example, assume the interpreter is installed as /usr/local/bin/php-cgi. Additionally, because of the way PHP operates in CGI mode, you will need an intermediate script. This one works in Posix environments: This script can be reused for multiple cgi directives. In this example, it is installed as /usr/local/cgi-bin/phpwrap. The argument following -c is your initialization file for PHP. In this example, it is named /home/quixote/.config/php/php-cgi.ini. Two PHP files will be used for this example. The first, /usr/local/cgi-bin/sample/min.php, looks like this: The second, /usr/local/cgi-bin/sample/action.php, follows: The following directive in your Caddyfile will make the application available at sample/min.php: This example demonstrates printing a CGI rule
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
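As a sketch of the bulk-import pattern just described; the users table, its columns, and the helper function are hypothetical:

    import (
        "database/sql"

        "github.com/lib/pq"
    )

    // bulkInsertUsers copies the given name/age pairs into a hypothetical users table.
    func bulkInsertUsers(db *sql.DB, names []string, ages []int64) error {
        txn, err := db.Begin()
        if err != nil {
            return err
        }
        stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
        if err != nil {
            return err
        }
        for i := range names {
            if _, err := stmt.Exec(names[i], ages[i]); err != nil {
                return err
            }
        }
        // An Exec with no arguments flushes all buffered data to the server.
        if _, err := stmt.Exec(); err != nil {
            return err
        }
        if err := stmt.Close(); err != nil {
            return err
        }
        return txn.Commit()
    }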
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/lib/pq/example/listen.
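A sketch of the listener pattern described above; the connection string, channel name, and reconnect intervals are placeholders:

    import (
        "fmt"
        "time"

        "github.com/lib/pq"
    )

    func waitForEvents(conninfo string) error {
        // The event callback receives connection state changes; nil ignores them.
        listener := pq.NewListener(conninfo, 10*time.Second, time.Minute, nil)
        if err := listener.Listen("events"); err != nil {
            return err
        }
        for n := range listener.Notify {
            if n == nil {
                // A nil notification signals that the connection was re-established;
                // some notifications may have been lost in the meantime.
                fmt.Println("connection re-established")
                continue
            }
            fmt.Printf("notification on channel %q: %s\n", n.Channel, n.Extra)
        }
        return nil
    }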
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and `array-contains`. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
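A compact sketch of the read path described above; the project ID, collection layout, and the State type are assumptions:

    import (
        "context"

        "cloud.google.com/go/firestore"
    )

    type State struct {
        Capital    string  `firestore:"capital"`
        Population float64 `firestore:"pop"` // in millions
    }

    func readState(ctx context.Context) (*State, error) {
        client, err := firestore.NewClient(ctx, "my-project-id")
        if err != nil {
            return nil, err
        }
        defer client.Close()

        // DocumentRefs are lightweight; no network traffic happens until Get.
        snap, err := client.Collection("States").Doc("California").Get(ctx)
        if err != nil {
            return nil, err
        }
        var s State
        if err := snap.DataTo(&s); err != nil {
            return nil, err
        }
        return &s, nil
    }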
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
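A minimal sketch of a single-read transaction, assuming a database path of the usual projects/.../instances/.../databases/... form and a hypothetical Accounts table:

    import (
        "context"

        "cloud.google.com/go/spanner"
    )

    func readBalance(ctx context.Context, db string) (int64, error) {
        client, err := spanner.NewClient(ctx, db)
        if err != nil {
            return 0, err
        }
        defer client.Close()

        // Single creates a one-shot read-only transaction; ReadRow fetches one row by key.
        row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
        if err != nil {
            return 0, err
        }
        var balance int64
        if err := row.Column(0, &balance); err != nil {
            return 0, err
        }
        return balance, nil
    }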
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
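A sketch of the write paths mentioned above, combining a mutation-based Apply with a DML statement inside a read-write transaction; the table and column names are hypothetical:

    import (
        "context"

        "cloud.google.com/go/spanner"
    )

    func credit(ctx context.Context, client *spanner.Client) error {
        // Mutation-based write applied outside an explicit transaction.
        m := spanner.Insert("Accounts",
            []string{"user", "balance"},
            []interface{}{"bob", int64(100)})
        if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
            return err
        }

        // DML inside a read-write transaction; retries are handled by the client.
        _, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
            stmt := spanner.Statement{SQL: `UPDATE Accounts SET balance = balance + 10 WHERE user = "bob"`}
            rowCount, err := txn.Update(ctx, stmt)
            _ = rowCount // number of rows affected
            return err
        })
        return err
    }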
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address, the driver may connect multiple times to the same host and will not mark the node as down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
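A minimal sketch of the setup described above, using placeholder addresses, keyspace, and credentials:

    import (
        "log"

        "github.com/gocql/gocql"
    )

    func connect() *gocql.Session {
        cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
        cluster.Keyspace = "example"
        cluster.Consistency = gocql.Quorum
        cluster.Authenticator = gocql.PasswordAuthenticator{
            Username: "user",
            Password: "password",
        }
        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        // Remember to call session.Close() when you are done with it.
        return session
    }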
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs), while the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages of a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
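A sketch of manual paging with page state, assuming a hypothetical tweet table with a timeline partition key and a text column:

    import "github.com/gocql/gocql"

    // fetchPage returns one page of results plus the state needed to fetch the next page.
    func fetchPage(session *gocql.Session, pageState []byte) ([]string, []byte, error) {
        iter := session.Query(`SELECT text FROM tweet WHERE timeline = ?`, "me").
            PageSize(100).
            PageState(pageState).
            Iter()
        var texts []string
        var text string
        for iter.Scan(&text) {
            texts = append(texts, text)
        }
        // An empty page state means there are no more pages (or an error occurred).
        nextPageState := iter.PageState()
        return texts, nextPageState, iter.Close()
    }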
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing. The two parallel executions of the query race to return a result; the first result received will be returned.
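A sketch of a logged batch built with Session.NewBatch and executed with Session.ExecuteBatch, again using a hypothetical tweet table:

    import "github.com/gocql/gocql"

    func insertBoth(session *gocql.Session) error {
        b := session.NewBatch(gocql.LoggedBatch)
        b.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "first")
        b.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "second")
        // Individual statements in the batch are prepared separately by the driver.
        return session.ExecuteBatch(b)
    }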
UDTs can be (un)marshaled from/to a map[string]interface{}, a Go struct, or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces. For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
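A sketch of mapping a UDT to a struct with cql tags; the coordinates type and points table are hypothetical:

    import "github.com/gocql/gocql"

    // Coordinates maps a hypothetical CQL UDT `coordinates` with fields lat and lon.
    type Coordinates struct {
        Lat float64 `cql:"lat"`
        Lon float64 `cql:"lon"`
    }

    func savePoint(session *gocql.Session, id gocql.UUID) error {
        return session.Query(`INSERT INTO points (id, coords) VALUES (?, ?)`,
            id, Coordinates{Lat: 37.42, Lon: -122.08}).Exec()
    }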
Package applicationinsights provides the API client, operations, and parameter types for Amazon CloudWatch Application Insights. Amazon CloudWatch Application Insights is a service that helps you detect common problems with your applications. It enables you to pinpoint the source of issues in your applications (built with technologies such as Microsoft IIS, .NET, and Microsoft SQL Server), by providing key insights into detected problems. After you onboard your application, CloudWatch Application Insights identifies, recommends, and sets up metrics and logs. It continuously analyzes and correlates your metrics and logs for unusual behavior to surface actionable problems with your application. For example, if your application is slow and unresponsive and leading to HTTP 500 errors in your Application Load Balancer (ALB), Application Insights informs you that a memory pressure problem with your SQL Server database is occurring. It bases this analysis on impactful metrics and log errors.
Package opsworkscm provides the API client, operations, and parameter types for AWS OpsWorks CM. AWS OpsWorks for configuration management (CM) is a service that runs and manages configuration management servers. You can use AWS OpsWorks CM to create and manage AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise servers, and add or remove nodes for the servers to manage. Glossary of terms Server: A configuration management server that can be highly-available. The configuration management server runs on an Amazon Elastic Compute Cloud (EC2) instance, and may use various other AWS services, such as Amazon Relational Database Service (RDS) and Elastic Load Balancing. A server is a generic abstraction over the configuration manager that you want to use, much like Amazon RDS. In AWS OpsWorks CM, you do not start or stop servers. After you create servers, they continue to run until they are deleted. Engine: The engine is the specific configuration manager that you want to use. Valid values in this release include ChefAutomate and Puppet . Backup: This is an application-level backup of the data that the configuration manager stores. AWS OpsWorks CM creates an S3 bucket for backups when you launch the first server. A backup maintains a snapshot of a server's configuration-related attributes at the time the backup starts. Events: Events are always related to a server. Events are written during server creation, when health checks run, when backups are created, when system maintenance is performed, etc. When you delete a server, the server's events are also deleted. Account attributes: Every account has attributes that are assigned in the AWS OpsWorks CM database. These attributes store information about configuration limits (servers, backups, etc.) and your customer account. AWS OpsWorks CM supports the following endpoints, all HTTPS. You must connect to one of the following endpoints. Your servers can only be accessed or managed within the endpoint in which they are created. opsworks-cm.us-east-1.amazonaws.com opsworks-cm.us-east-2.amazonaws.com opsworks-cm.us-west-1.amazonaws.com opsworks-cm.us-west-2.amazonaws.com opsworks-cm.ap-northeast-1.amazonaws.com opsworks-cm.ap-southeast-1.amazonaws.com opsworks-cm.ap-southeast-2.amazonaws.com opsworks-cm.eu-central-1.amazonaws.com opsworks-cm.eu-west-1.amazonaws.com For more information, see AWS OpsWorks endpoints and quotas in the AWS General Reference. All API operations allow for five requests per second with a burst of 10 requests per second.
Package memorydb provides the API client, operations, and parameter types for Amazon MemoryDB. MemoryDB is a fully managed, Redis OSS-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. It is compatible with Redis OSS, a popular open source data store, enabling you to leverage Redis OSS’ flexible and friendly data structures, APIs, and commands.
Package marketplaceentitlementservice provides the API client, operations, and parameter types for AWS Marketplace Entitlement Service. This reference provides descriptions of the AWS Marketplace Entitlement Service API. AWS Marketplace Entitlement Service is used to determine the entitlement of a customer to a given product. An entitlement represents capacity in a product owned by the customer. For example, a customer might own some number of users or seats in an SaaS application or some amount of data capacity in a multi-tenant database. Getting Entitlement Records
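A sketch of retrieving entitlement records with the AWS SDK for Go v2, assuming default credential configuration; the product code is a placeholder:

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/marketplaceentitlementservice"
    )

    func listEntitlements(ctx context.Context) error {
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            return err
        }
        client := marketplaceentitlementservice.NewFromConfig(cfg)

        // ProductCode identifies the Marketplace product; the value here is a placeholder.
        out, err := client.GetEntitlements(ctx, &marketplaceentitlementservice.GetEntitlementsInput{
            ProductCode: aws.String("example-product-code"),
        })
        if err != nil {
            return err
        }
        for _, e := range out.Entitlements {
            log.Println(aws.ToString(e.ProductCode), aws.ToString(e.CustomerIdentifier))
        }
        return nil
    }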
Copyright 2022 UP9. All rights reserved. Use of this source code is governed by the Apache License 2.0 that can be found in the LICENSE file. This is the client library for the Basenine database server.
Package goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variable named in $REDIS_PROVIDER, or by $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. -use-number=false — Uses json.Number when decoding numbers in the job payloads. This will avoid issues that occur when goworker and the json package decode large numbers as floats, which then get encoded in scientific notation, losing precision. This will default to true soon. You can also configure your own flags for use within your workers.
Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant of the loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and then, ten seconds later, sends a KILL to force it to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database, stored as JSON objects with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished if the process is killed before goworker can flush the update to Redis.
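A minimal worker sketch matching the signature and registration described above; the MyClass name mirrors the redis-cli example, while the program layout and the entry-point call (named goworker.Main() following the text above) are assumptions about a typical setup:

    package main

    import (
        "fmt"

        "github.com/benmanns/goworker"
    )

    // myFunc receives the queue name and the job arguments as a slice of interfaces.
    func myFunc(queue string, args ...interface{}) error {
        fmt.Printf("from %s: %v\n", queue, args)
        return nil
    }

    func init() {
        // Register the worker under the Resque class name used when enqueuing jobs.
        goworker.Register("MyClass", myFunc)
    }

    func main() {
        // Parse any custom flags before handing control to goworker, as noted above.
        goworker.Main()
    }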
Package adbc defines the interfaces for Arrow Database Connectivity (ADBC), an Arrow-based interface between applications and database drivers. ADBC aims to provide a vendor-independent API for SQL and Substrait-based database access that is targeted at analytics/OLAP use cases. This API is intended to be implemented directly by drivers and used directly by client applications. To assist portability between different vendors, a "driver manager" library is also provided, which implements this same API, but dynamically loads drivers internally and forwards calls appropriately. In general, objects are expected to allow serialized access safely from multiple goroutines, but not necessarily concurrent access. Specific implementations may allow concurrent access. EXPERIMENTAL. Interface subject to change.
Package metrics is a telemetry client designed for Uber's software networking team. It prioritizes performance on the hot path and integration with both push- and pull-based collection systems. Like Prometheus and Tally, it supports metrics tagged with arbitrary key-value pairs. Like Prometheus, but unlike Tally, metric names should be relatively long and descriptive - generally speaking, metrics from the same process shouldn't share names. (See the documentation for the Root struct below for a longer explanation of the uniqueness rules.) For example, prefer "grpc_successes_by_procedure" over "successes", since "successes" is common and vague. Where relevant, metric names should indicate their unit of measurement (e.g., "grpc_success_latency_ms"). Counters represent monotonically increasing values, like a car's odometer. Gauges represent point-in-time readings, like a car's speedometer. Both counters and gauges expose not only write operations (set, add, increment, etc.), but also atomic reads. This makes them easy to integrate directly into your business logic: you can use them anywhere you'd otherwise use a 64-bit atomic integer. This package doesn't support analogs of Tally's timer or Prometheus's summary, because they can't be accurately aggregated at query time. Instead, it approximates distributions of values with histograms. These require more up-front work to set up, but are typically more accurate and flexible when queried. See https://prometheus.io/docs/practices/histograms/ for a more detailed discussion of the trade-offs involved. Plain counters, gauges, and histograms have a fixed set of tags. However, it's common to encounter situations where a subset of a metric's tags vary constantly. For example, you might want to track the latency of your database queries by table: you know the database cluster, application name, and hostname at process startup, but you need to specify the table name with each query. To model these situations, this package uses vectors. Each vector is a local cache of metrics, so accessing them is quite fast. Within a vector, all metrics share a common set of constant tags and a list of variable tags. In our database query example, the constant tags are cluster, application, and hostname, and the only variable tag is table name. Usage examples are included in the documentation for each vector type. This package integrates with StatsD- and M3-based collection systems by periodically pushing differential updates. (Users can integrate with other push-based systems by implementing the push.Target interface.) It integrates with pull-based collectors by exposing an HTTP handler that supports Prometheus's text and protocol buffer exposition formats. Examples of both push and pull integration are included in the documentation for the root struct's Push and ServeHTTP methods. If you're unfamiliar with Tally and Prometheus, you may want to consult their documentation:
Package miniredis is a pure Go Redis test server, for use in Go unit tests. There are no dependencies on system binaries, and every server you start will be empty. Start a server with `s, err := miniredis.Run()`. Stop it with `defer s.Close()`. Point your Redis client to `s.Addr()` or `s.Host(), s.Port()`. Set keys directly via s.Set(...) and similar commands, or use a Redis client. For direct use you can select a Redis database with either `s.Select(12); s.Get("foo")` or `s.DB(12).Get("foo")`.
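A sketch of a test using only the calls mentioned above; the import path shown is the v1 module path, so adjust it if you use the v2 module:

    import (
        "testing"

        "github.com/alicebob/miniredis"
    )

    func TestWithMiniredis(t *testing.T) {
        s, err := miniredis.Run()
        if err != nil {
            t.Fatal(err)
        }
        defer s.Close()

        // Seed the server directly, then point your Redis client at s.Addr().
        if err := s.Set("foo", "bar"); err != nil {
            t.Fatal(err)
        }
        got, err := s.Get("foo")
        if err != nil {
            t.Fatal(err)
        }
        if got != "bar" {
            t.Fatalf("got %q, want %q", got, "bar")
        }
    }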
Package godo is the DigitalOcean API v2 client for Go. The Databases service provides access to the DigitalOcean managed database suite of products. Customers can create new database clusters, migrate them between regions, create replicas and interact with their configurations. Each database service is referred to as a Database. A SQL database service can have multiple databases residing in the system. To help make these entities distinct from Databases in godo, we refer to them here as DatabaseDBs.
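A hedged sketch of listing database clusters with the Databases service; the access token is a placeholder and the fields printed are assumptions about the Database struct:

    import (
        "context"
        "fmt"

        "github.com/digitalocean/godo"
    )

    func listClusters(ctx context.Context, token string) error {
        client := godo.NewFromToken(token)

        // Passing nil options lists the first page of database clusters.
        clusters, _, err := client.Databases.List(ctx, nil)
        if err != nil {
            return err
        }
        for _, c := range clusters {
            fmt.Println(c.Name, c.EngineSlug)
        }
        return nil
    }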
This is a Go library used to access the OpenFoodFacts.org database for food product, ingredient and nutritional data from within your Go application. The main method of using this library is to create a DataOperator and call methods on it. Create a Client to retrieve and modify database items. Client interacts with the official HTTP API from openfoodfacts.org.
Package main is a stub for wr's command line interface, with the actual implementation in the cmd package. wr is a workflow runner. You use it to run the commands in your workflow easily, automatically, reliably, with repeatability, and while making optimal use of your available computing resources. wr is implemented as a polling-free in-memory job queue with an on-disk ACID transactional embedded database, written in Go. Its main benefits over other software workflow management systems are its very low latency and overhead, its high performance at scale, its real-time status updates with a view on all your workflows on one screen, its permanent searchable history of all the commands you have ever run, and its "live" dependencies enabling easy automation of ongoing projects. Start up the manager daemon, which gives you a URL you can view the web interface on: In addition to the "local" scheduler, which will run your commands on all available cores of the local machine, you can also have it run your commands on your LSF cluster or in your OpenStack environment (where it will scale the number of servers needed up and down automatically). Now, stick the commands you want to run in a text file and: Arbitrarily complex workflows can be formed by specifying command dependencies. Use the --help option of `wr add` for details. wr's core is implemented in the queue package. This is the in-memory job queue that holds commands that still need to be run. Its multiple sub-queues enable certain guarantees: a given command will only get run by a single client at any one time; if a client dies, the command will get run by another client instead; if a command cannot be run, it is buried until the user takes action; if a command has a dependency, it won't run until its dependencies are complete. The jobqueue package provides client+server code for interacting with the in-memory queue from the queue package, and by storing all new commands in an on-disk database, provides an additional guarantee: that (dynamic) workflows won't break because a job that was added got "lost" before it got run. It also retains all completed jobs, enabling searching through of past workflows and allowing for "live" dependencies, triggering the rerunning of previously completed commands if their dependencies change. The jobqueue package is also what actually does the main "work" of the system: the server component knows how many commands need to be run and what their resource requirements (memory, time, CPUs, etc.) are, and submits the appropriate number of jobqueue runner clients to the job scheduler. The jobqueue/scheduler package has the scheduler-specific code that ensures that these runner clients get run on the configured system in the most efficient way possible. E.g. for LSF, if we have 10 commands that need 2GB of memory to run, we will submit a job array of size 10 with 2GB of memory reservation to LSF. The most limited (and therefore potentially least contended) queue capable of running the commands will be chosen. For OpenStack, the cheapest server (in terms of cores and memory) that can run the commands will be spawned, and once there is no more work to do on those servers, they get terminated to free up resources. The cloud package implements methods for interacting with cloud environments such as OpenStack. The corresponding jobqueue/scheduler package uses these methods to do their work. The static subdirectory contains the html, css and javascript needed for the web interface.
See jobqueue/serverWebI.go for how the web interface backend is implemented. The internal package contains general utility functions, and most notably config.go holds the code for how the command line interface deals with config options.
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom. 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation. 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb). 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file. 2016-04-03: Release v1.0.3 fixes a data race. 2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock. 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before, plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful when combined with the UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed. 2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards". 2015-01-12: ACID guarantees were tightened at the cost of performance in some cases.
The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction (see the Go sketch below). See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter.
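The batched-insert advice above can be illustrated with a short sketch against the package's Go API. This assumes ql.OpenMem, ql.NewRWCtx and the (*DB).Run result shape (recordsets, statement index, error); treat those as assumptions to check against the package documentation.

	package main

	import (
		"fmt"
		"log"

		"github.com/cznic/ql"
	)

	func main() {
		// Throw-away in-memory database.
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}
		ctx := ql.NewRWCtx() // read-write transaction context

		// Mutating statements must run inside a transaction.
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (i int64, s string);
			COMMIT;
		`); err != nil {
			log.Fatal(err)
		}

		// Insert many rows inside one transaction rather than one
		// transaction per row, per the advice above.
		if _, _, err = db.Run(ctx, "BEGIN TRANSACTION;"); err != nil {
			log.Fatal(err)
		}
		for i := 0; i < 1000; i++ {
			if _, _, err = db.Run(ctx, "INSERT INTO t VALUES ($1, $2);", int64(i), fmt.Sprint(i)); err != nil {
				log.Fatal(err)
			}
		}
		if _, _, err = db.Run(ctx, "COMMIT;"); err != nil {
			log.Fatal(err)
		}
	}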
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared; however, note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for the names of metadata virtual tables. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens. Operators consisting of more than one character are referred to by names in the rest of the documentation. An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.
Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits; and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values. All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.
Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string a is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte.

If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +, - also apply to times. All other arithmetic operators apply to integers only.
	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers

	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers

	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.
- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).
- Integer values are comparable and ordered, in the usual way. Note that durations are integers.
- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.
- Rational values are comparable and ordered, in the usual way.
- String values are comparable and ordered, lexically byte-wise.
- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.
- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.
- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases:

- x has type T.
- x's type and T are both integer or floating point types.
- x's type and T are both complex types.
- x is an integer, except bigint or duration, and T is a string type.

Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. For conversions to and from a string type, the following rules apply:

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer.
Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.
3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.
4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.
5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.
6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).
7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.
8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.
9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.
2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.
3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.
4. All row columns whose definition includes the constraint clause will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which allows inserting values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as is the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.
A select statement is evaluated in the following order:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).
2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)
3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.
4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).
5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).
6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).
7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).
8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.
9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB metadata, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects if the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to in the appropriate zone for that time in the given location.
The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL. The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using go's number format functions in the `strconv` package. For all three functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one argument. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join.
For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL. The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, would be interpreted if it were the value; it serves as an example of the input format (a plain-Go sketch of this layout convention follows below). The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
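As noted above, parseTime and formatTime follow Go's reference-time layout convention (formatTime directly calls t.String()). The following plain-Go sketch, using only the standard library, shows how a layout string serves both as an example of the output and as a pattern for the input:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	func main() {
		// The layout is an example rendering of the reference time
		// Mon Jan 2 15:04:05 MST 2006; the same pattern is applied
		// to the value being parsed or formatted.
		const layout = "2006-01-02 15:04:05"

		t, err := time.Parse(layout, "2024-03-01 12:30:00")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(t.Format(layout)) // 2024-03-01 12:30:00

		// With no zone indicator in the layout or value, the result
		// is in UTC, matching the parseTime behaviour described above.
		fmt.Println(t.Location()) // UTC
	}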
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL. The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real or imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them.
However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
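To tie the index discussion together, here is a minimal Go sketch that creates an index and inspects a query plan with EXPLAIN. It assumes ql.OpenMem, ql.NewRWCtx, the (*DB).Run usage shown earlier and the Recordset.Do iteration method; check the package documentation for the exact signatures.

	package main

	import (
		"fmt"
		"log"

		"github.com/cznic/ql"
	)

	func main() {
		db, err := ql.OpenMem()
		if err != nil {
			log.Fatal(err)
		}
		ctx := ql.NewRWCtx()

		// Schema changes are mutations, so they need a transaction.
		if _, _, err = db.Run(ctx, `
			BEGIN TRANSACTION;
				CREATE TABLE t (f int64);
				CREATE INDEX x ON t (f);
			COMMIT;
		`); err != nil {
			log.Fatal(err)
		}

		// EXPLAIN only reads, so no transaction context is passed.
		// It returns a recordset of text lines describing the plan,
		// which should mention the index x for this WHERE clause.
		rs, _, err := db.Run(nil, "EXPLAIN SELECT * FROM t WHERE f == 42;")
		if err != nil {
			log.Fatal(err)
		}
		if err := rs[0].Do(false, func(data []interface{}) (bool, error) {
			fmt.Println(data[0]) // one plan line per record (assumed shape)
			return true, nil
		}); err != nil {
			log.Fatal(err)
		}
	}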
Command mox is a modern, secure, full-featured, open source mail server for low-maintenance self-hosted email. Mox is started with the "serve" subcommand, but mox also has many other subcommands. Many of those commands talk to a running mox instance, through the ctl file in the data directory. Specify the configuration file (that holds the path to the data directory) through the -config flag or MOXCONF environment variable. Commands that don't talk to a running mox instance are often for testing/debugging email functionality. For example for parsing an email message, or looking up SPF/DKIM/DMARC records. Below is the usage information as printed by the command when started without any parameters. Followed by the help and usage information for each command. Start mox, serving SMTP/IMAP/HTTPS. Incoming email is accepted over SMTP. Email can be retrieved by users using IMAP. HTTP listeners are started for the admin/account web interfaces, and for automated TLS configuration. Missing essential TLS certificates are immediately requested, other TLS certificates are requested on demand. Only implemented on unix systems, not Windows. Quickstart generates configuration files and prints instructions to quickly set up a mox instance. Quickstart writes configuration files, prints initial admin and account passwords, and lists the DNS records you should create. If you run it on Linux it writes a systemd service file and prints commands to enable and start mox as a service. All output is written to quickstart.log for later reference. The user or uid is optional, defaults to "mox", and is the user or uid/gid mox will run as after initialization. Quickstart assumes mox will run on the machine you run quickstart on and uses its host name and public IPs. On many systems the hostname is not a fully qualified domain name, but only the first dns "label", e.g. "mail" in case of "mail.example.org". If so, quickstart does a reverse DNS lookup to find the hostname, and as fallback uses the label plus the domain of the email address you specified. Use flag -hostname to explicitly specify the hostname mox will run on. Mox is by far easiest to operate if you let it listen on port 443 (HTTPS) and 80 (HTTP). TLS will be fully automatic with ACME with Let's Encrypt. You can run mox along with an existing webserver, but because of MTA-STS and autoconfig, you'll need to forward HTTPS traffic for two domains to mox. Run "mox quickstart -existing-webserver ..." to generate configuration files and instructions for configuring mox along with an existing webserver. But please first consider configuring mox on port 443. It can itself serve domains with HTTP/HTTPS, including with automatic TLS with ACME, is easily configured through both configuration files and admin web interface, and can act as a reverse proxy (and static file server for that matter), so you can forward traffic to your existing backend applications. Look for "WebHandlers:" in the output of "mox config describe-domains" and see the output of "mox config example webhandlers". Shut mox down, giving connections a maximum of 3 seconds to stop before closing them. While shutting down, new IMAP and SMTP connections will get a status response indicating temporary unavailability. Existing connections will get a 3 second period to finish their transaction and shut down. Under normal circumstances, only IMAP has long-living connections, with the IDLE command to get notified of new mail deliveries. Set a new password for an account. The password is read from stdin.
Secrets derived from the password, but not the password itself, are stored in the account database. The stored secrets are for authentication with: scram-sha-256, scram-sha-1, cram-md5, plain text (bcrypt hash). The parameter is an account name, as configured under Accounts in domains.conf and as present in the data/accounts/ directory, not a configured email address for an account. Set a new admin password, for the web interface. The password is read from stdin. Its bcrypt hash is stored in a file named "adminpasswd" in the configuration directory. Print the log levels, or set a new default log level, or a level for the given package. By default, a single log level applies to all logging in mox. But for each "pkg", an overriding log level can be configured. Examples of packages: smtpserver, smtpclient, queue, imapserver, spf, dkim, dmarc, junk, message, etc. Specify a pkg and an empty level to clear the configured level for a package. Valid labels: error, info, debug, trace, traceauth, tracedata. List hold rules for the delivery queue. Messages submitted to the queue that match a hold rule will be marked as on hold and not scheduled for delivery. Add hold rule for the delivery queue. Add a hold rule to mark matching newly submitted messages as on hold. Set the matching rules with the flags. Don't specify any flags to match all submitted messages. Remove hold rule for the delivery queue. Remove a hold rule by its id. List matching messages in the delivery queue. Prints the message with its ID, last and next delivery attempts, last error. Mark matching messages on hold. Messages that are on hold are not delivered until marked as off hold again, or otherwise handled by the admin. Mark matching messages off hold. Once off hold, messages can be delivered according to their current next delivery attempt. See the "queue schedule" command. Change next delivery attempt for matching messages. The next delivery attempt is adjusted by the duration parameter. If the -now flag is set, the new delivery attempt is set to the duration added to the current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0". Set transport for matching messages. By default, the routing rules determine how a message is delivered. The default and common case is direct delivery with SMTP. Messages can get a previously configured transport assigned to use for delivery, e.g. using submission to another mail server or with connections over a SOCKS proxy. Set TLS requirements for delivery of matching messages. Value "yes" is handled as if the RequireTLS extension was specified during submission. Value "no" is handled as if the message has a header "TLS-Required: No". This header is not added by the queue. If messages without this header are relayed through other mail servers they will apply their own default TLS policy. Value "default" is the default behaviour, currently for unverified opportunistic TLS. Fail delivery of matching messages, delivering DSNs. Failing a message is handled similar to how delivery is given up after all delivery attempts failed. The DSN (delivery status notification) message contains a line saying the message was canceled by the admin. Remove matching messages from the queue. Dangerous operation, this completely removes the message. If you want to store the message, use "queue dump" before removing. Dump a message from the queue. The message is printed to stdout and is in standard internet mail format. 
List matching messages in the retired queue. Prints messages with their ID and results. Print a message from the retired queue. Prints a JSON representation of the information from the retired queue. Print addresses in suppression list. Add address to suppression list for account. Remove address from suppression list for account. Check if address is present in suppression list, for any or specific account. List matching webhooks in the queue. Prints list of webhooks, their IDs and basic information. Change next delivery attempt for matching webhooks. The next delivery attempt is adjusted by the duration parameter. If the -now flag is set, the new delivery attempt is set to the duration added to the current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0". Fail delivery of matching webhooks. Print details of a webhook from the queue. The webhook is printed to stdout as JSON. List matching webhooks in the retired queue. Prints list of retired webhooks, their IDs and basic information. Print details of a webhook from the retired queue. The retired webhook is printed to stdout as JSON. Import a maildir into an account. The mbox/maildir archive is accessed and imported by the running mox process, so it must have access to the archive files. The default suggested systemd service file isolates mox from most of the file system, with only the "data/" directory accessible, so you may want to put the mbox/maildir archive files in a directory like "data/import/" to make it available to mox. By default, messages will train the junk filter based on their flags and, if "automatic junk flags" configuration is set, based on mailbox naming. If the destination mailbox is the Sent mailbox, the recipients of the messages are added to the message metadata, causing later incoming messages from these recipients to be accepted, unless other reputation signals prevent that. Users can also import mailboxes/messages through the account web page by uploading a zip or tgz file with mbox and/or maildirs. Messages are imported even if already present. Importing messages twice will result in duplicate messages. Mailbox flags, like "seen", "answered", will be imported. An optional dovecot-keywords file can specify additional flags, like Forwarded/Junk/NotJunk. Import an mbox into an account. Using mbox is not recommended, maildir is a better defined format. The mbox/maildir archive is accessed and imported by the running mox process, so it must have access to the archive files. The default suggested systemd service file isolates mox from most of the file system, with only the "data/" directory accessible, so you may want to put the mbox/maildir archive files in a directory like "data/import/" to make it available to mox. By default, messages will train the junk filter based on their flags and, if "automatic junk flags" configuration is set, based on mailbox naming. If the destination mailbox is the Sent mailbox, the recipients of the messages are added to the message metadata, causing later incoming messages from these recipients to be accepted, unless other reputation signals prevent that. Users can also import mailboxes/messages through the account web page by uploading a zip or tgz file with mbox and/or maildirs. Messages are imported even if already present. Importing messages twice will result in duplicate messages. Export one or all mailboxes from an account in maildir format. Export bypasses a running mox instance. 
It opens the account mailbox/message database file directly. This may block if a running mox instance also has the database open, e.g. for IMAP connections. To export from a running instance, use the accounts web page or webmail. Export messages from one or all mailboxes in an account in mbox format. Using mbox is not recommended. Maildir is a better format. Export bypasses a running mox instance. It opens the account mailbox/message database file directly. This may block if a running mox instance also has the database open, e.g. for IMAP connections. To export from a running instance, use the accounts web page or webmail. For mbox export, "mboxrd" is used where message lines starting with the magic "From " string are escaped by prepending a >. All ">*From " are escaped, otherwise reconstructing the original could lose a ">". Start a local SMTP/IMAP server that accepts all messages, useful when testing/developing software that sends email. Localserve starts mox with a configuration suitable for local email-related software development/testing. It listens for SMTP/Submission(s), IMAP(s) and HTTP(s), on the regular port numbers + 1000. Data is stored in the system user's configuration directory under "mox-localserve", e.g. $HOME/.config/mox-localserve/ on linux, but can be overridden with the -dir flag. If the directory does not yet exist, it is automatically initialized with configuration files, an account with email address mox@localhost and password moxmoxmox, and a newly generated self-signed TLS certificate. Incoming messages are delivered as normal, falling back to accepting and delivering to the mox account for unknown addresses. Submitted messages are added to the queue, which delivers by ignoring the destination servers, always connecting to itself instead. Recipient addresses with the following localpart suffixes are handled specially: - "temperror": fail with a temporary error code - "permerror": fail with a permanent error code - [45][0-9][0-9]: fail with the specific error code - "timeout": no response (for an hour) If the localpart begins with "mailfrom" or "rcptto", the error is returned during those commands instead of during "data". Prints help about matching commands. If multiple commands match, they are listed along with the first line of their help text. If a single command matches, its usage and full help text is printed. Creates a backup of the config and data directory. Backup copies the config directory to <destdir>/config, and creates <destdir>/data with a consistent snapshot of the databases and message files and copies other files from the data directory. Empty directories are not copied. The backup can then be stored elsewhere for long-term storage, or used to fall back to should an upgrade fail. Simply copying files in the data directory while mox is running can result in unusable database files. Message files never change (they are read-only, though can be removed) and are hard-linked so they don't consume additional space. If hardlinking fails, for example when the backup destination directory is on a different file system, a regular copy is made. Using a destination directory like "data/tmp/backup" increases the odds hardlinking succeeds: the default systemd service file specifically mounts the data directory, causing attempts to hardlink outside it to fail with an error about cross-device linking. All files in the data directory that aren't recognized (i.e. 
other than known database files, message files, an acme directory, the "tmp" directory, etc), are stored, but with a warning. Remove files in the destination directory before doing another backup. The backup command will not overwrite files, but print and return errors. Exit code 0 indicates the backup was successful. A clean successful backup does not print any output, but may print warnings. Use the -verbose flag for details, including timing. To restore a backup, first shut down mox, move away the old data directory and move an earlier backed up directory in its place, run "mox verifydata <datadir>", possibly with the "-fix" option, and restart mox. After the restore, you may also want to run "mox bumpuidvalidity" for each account for which messages in a mailbox changed, to force IMAP clients to synchronize mailbox state. Before upgrading, to check if the upgrade will likely succeed, first make a backup, then use the new mox binary to run "mox verifydata <backupdir>/data". This can change the backup files (e.g. upgrade database files, move away unrecognized message files), so you should make a new backup before actually upgrading. Verify the contents of a data directory, typically of a backup. Verifydata checks all database files to see if they are valid BoltDB/bstore databases. It checks that all messages in the database have a corresponding on-disk message file and there are no unrecognized files. If option -fix is specified, unrecognized message files are moved away. This may be needed after a restore, because messages enqueued or delivered in the future may get those message sequence numbers assigned and writing the message file would fail. Consistency of message/mailbox UID, UIDNEXT and UIDVALIDITY is verified as well. Because verifydata opens the database files, schema upgrades may automatically be applied. This can happen if you use a new mox release. It is useful to run "mox verifydata" with a new binary before attempting an upgrade, but only on a copy of the database files, as made with "mox backup". Before upgrading, make a new backup again since "mox verifydata" may have upgraded the database files, possibly making them potentially no longer readable by the previous version. Print licenses of mox source code and dependencies. Parses and validates the configuration files. If valid, the command exits with status 0. If not valid, all errors encountered are printed. Check the DNS records with the configuration for the domain, and print any errors/warnings. Prints annotated DNS records as zone file that should be created for the domain. The zone file can be imported into existing DNS software. You should review the DNS records, especially if your domain previously/currently has email configured. Prints an annotated empty configuration for use as domains.conf. The domains configuration file contains the domains and their configuration, and accounts and their configuration. This includes the configured email addresses. The mox admin web interface, and the mox command line interface, can make changes to this file. Mox automatically reloads this file when it changes. Like the static configuration, the example domains.conf printed by this command needs modifications to make it valid. Prints an annotated empty configuration for use as mox.conf. The static configuration file cannot be reloaded while mox is running. Mox has to be restarted for changes to the static configuration file to take effect. This configuration file needs modifications to make it valid. 
For example, it may contain unfinished list items. List all accounts. Each account is printed on a line, with optional additional tab-separated information, such as "(disabled)". Add an account with an email address and reload the configuration. Email can be delivered to this address/account. A password has to be configured explicitly, see the setaccountpassword command. Remove an account and reload the configuration. Email addresses for this account will also be removed, and incoming email for these addresses will be rejected. All data for the account will be removed. Disable login for an account, showing a message to users when they try to login. Incoming email will still be accepted for the account, and queued email from the account will still be delivered. No new login sessions are possible. The message must be non-empty, ASCII-only without control characters including newline, and at most 256 characters because it is used in SMTP/IMAP. Enable login again for an account. Login attempts by the user no longer result in an error message. Adds an address to an account and reloads the configuration. If the address starts with a @ (i.e. a missing localpart), this is a catchall address for the domain. Remove an address and reload the configuration. Incoming email for this address will be rejected after the address has been removed. Adds a new domain to the configuration and reloads the configuration. The account is used for the postmaster mailboxes of the domain, including for DMARC and TLS reporting. Localpart is the "username" at the domain for this account. It must be set if and only if the account does not yet exist. The domain can be created in disabled mode, preventing automatically requesting TLS certificates with ACME, and rejecting incoming/outgoing messages involving the domain, but allowing further configuration of the domain. Remove a domain from the configuration and reload the configuration. This is a dangerous operation. Incoming email delivery for this domain will be rejected. Disable a domain and reload the configuration. This is a dangerous operation. Incoming/outgoing messages involving this domain will be rejected. Enable a domain and reload the configuration. Incoming/outgoing messages involving this domain will be accepted again. List TLS public keys for TLS client certificate authentication. If account is absent, the TLS public keys for all accounts are listed. Get a TLS public key for a fingerprint. Prints the type, name, account and address for the key, and the certificate in PEM format. Add a TLS public key to the account of the given address. The public key is read from the certificate. The optional name is a human-readable descriptive name of the key. If absent, the CommonName from the certificate is used. Remove TLS public key for fingerprint. Generate an ed25519 private key and minimal certificate for use as a TLS public key and write them to files starting with stem. The private key is written to $stem.$timestamp.ed25519privatekey.pkcs8.pem. The certificate is written to $stem.$timestamp.certificate.pem. The private key and certificate are also written to $stem.$timestamp.ed25519privatekey-certificate.pem. The certificate can be added to an account with "mox config account tlspubkey add". The combined file can be used with "mox sendmail". The private key is also written to standard error in raw-url-base64-encoded form, also for use with "mox sendmail". The fingerprint is written to standard error too, for reference. Show aliases (lists) for a domain. Print settings and members of an alias (list).
Add a new alias (list) with one or more addresses and public posting enabled. An alias is used for delivering incoming email to multiple recipients. If you want to add an address to an account, don't use an alias, just add the address to the account. Update alias (list) configuration. Remove an alias (list). Add addresses to an alias (list). Remove addresses from an alias (list). Describe configuration for mox when invoked as sendmail. Prints a systemd unit service file for mox. This is the same file as generated using quickstart. If the systemd service file has changed with a newer version of mox, use this command to generate an up to date version. Ensure host private keys exist for TLS listeners with ACME. In mox.conf, each listener can have TLS configured. Long-lived private key files can be specified, which will be used when requesting ACME certificates. Configuring these private keys makes it feasible to publish DANE TLSA records for the corresponding public keys in DNS, protected with DNSSEC, allowing TLS certificate verification without depending on a list of Certificate Authorities (CAs). Previous versions of mox did not pre-generate private keys for use with ACME certificates, but would generate private keys on-demand. By explicitly configuring private keys, they will not change automatically with new certificates, and the DNS TLSA records stay valid. This command looks for listeners in mox.conf with TLS with ACME configured. For each missing host private key (of type rsa-2048 and ecdsa-p256) a key is written to config/hostkeys/. If a certificate exists in the ACME "cache", its private key is copied. Otherwise a new private key is generated. Snippets for manually updating/editing mox.conf are printed. After running this command, and updating mox.conf, run "mox config dnsrecords" for a domain and create the TLSA DNS records it suggests to enable DANE. List available config examples, or print a specific example. Initiate a preauthenticated IMAP connection on file descriptor 0. For use with tools that can do IMAP over tunneled connections, e.g. with SSH during migrations. TLS is not possible on the connection, and authentication does not require TLS. Check if a newer version of mox is available. A single DNS TXT lookup to _updates.xmox.nl tells if a new version is available. If so, a changelog is fetched from https://updates.xmox.nl, and the individual entries verified with a builtin public key. The changelog is printed. Turn an ID from a Received header into a cid, for looking up in logs. A cid is essentially a connection counter initialized when mox starts. Each log line contains a cid. Received headers added by mox contain a unique ID that can be decrypted to a cid by the admin of a mox instance only. Print the configuration for email clients for a domain. Sending email is typically not done on the SMTP port 25, but on submission ports 465 (with TLS) and 587 (without initial TLS, but usually added to the connection with STARTTLS). For IMAP, the port with TLS is 993 and without is 143. Without TLS/STARTTLS, passwords are sent in clear text, which should only be configured over otherwise secured connections, like a VPN. Dial the address using TLS with certificate verification using DANE. Data is copied between connection and stdin/stdout until either side closes the connection. Connect to the MX server for a domain using STARTTLS verified with DANE. If no destination host is specified, regular delivery logic is used to find the hosts to attempt delivery to.
This involves following CNAMEs for the domain, looking up MX records, and possibly falling back to the domain name itself as host. If a destination host is specified, that is the only candidate host considered for dialing. With a list of destinations gathered, each is dialed until a successful SMTP session verified with DANE has been initialized, including EHLO and STARTTLS commands. Once connected, data is copied between connection and stdin/stdout, until either side closes the connection. This command follows the same logic as delivery attempts made from the queue, sharing most of its code. Print TLSA record for given certificate/key and parameters. Valid values: - usage: pkix-ta (0), pkix-ee (1), dane-ta (2), dane-ee (3) - selector: cert (0), spki (1) - matchtype: full (0), sha2-256 (1), sha2-512 (2) Common DANE TLSA record parameters are: dane-ee spki sha2-256, or 3 1 1, followed by a sha2-256 hash of the DER-encoded "SPKI" (subject public key info) from the certificate. An example DNS zone file entry: The first usable information from the pem file is used to compose the TLSA record. In case of selector "cert", a certificate is required. Otherwise the "subject public key info" (spki) of the first certificate or public or private key (pkcs#8, pkcs#1 or ec private key) is used. Lookup DNS name of given type. Lookup always prints whether the response was DNSSEC-protected. Examples: mox dns lookup ptr 1.1.1.1 mox dns lookup mx xmox.nl mox dns lookup txt _dmarc.xmox.nl. mox dns lookup tlsa _25._tcp.xmox.nl Generate a new ed25519 key for use with DKIM. Ed25519 keys are much smaller than RSA keys of comparable cryptographic strength. This is convenient because of maximum DNS message sizes. At the time of writing, not many mail servers appear to support ed25519 DKIM keys though, so it is recommended to sign messages with both RSA and ed25519 keys. Generate a new 2048 bit RSA private key for use with DKIM. The generated file is in PEM format, and has a comment noting it was generated for use with DKIM by mox. Lookup and print the DKIM record for the selector at the domain. Print a DKIM DNS TXT record with the public key derived from the private key read from stdin. The DNS should be configured as a TXT record at $selector._domainkey.$domain. Verify the DKIM signatures in a message and print the results. The message is parsed, and the DKIM-Signature headers are validated. Validation of older messages may fail because the DNS records have been removed or changed by now, or because the signature header may have specified an expiration time that was passed. Sign a message, adding DKIM-Signature headers based on the domain in the From header. The message is parsed, the domain looked up in the configuration files, and DKIM-Signature headers generated. The message is printed with the DKIM-Signature headers prepended. Lookup the DMARC policy for a domain, a DNS TXT record at _dmarc.<domain>, validate and print it. Parse a DMARC report from an email message, and print its extracted details. DMARC reports are periodically mailed, if requested in the DMARC DNS record of a domain. Reports are sent by mail servers that received messages with our domain in a From header. This may or may not be legitimate email. DMARC reports contain summaries of evaluations of DMARC and DKIM/SPF, which can help understand email deliverability problems. Parse an email message and evaluate it against the DMARC policy of the domain in the From-header. mailfromaddress and helodomain are used for SPF validation.
If both are empty, SPF validation is skipped. mailfromaddress should be the address used as MAIL FROM in the SMTP session. For DSN messages, that address may be empty. The helo domain was specified at the beginning of the SMTP transaction that delivered the message. These values can be found in message headers. For each reporting address in the domain's DMARC record, check if it has opted into receiving reports (if needed). A DMARC record can request reports about DMARC evaluations to be sent to an email/http address. If the organizational domain of the DMARC record and that of the report destination address do not match, the destination address must opt in to receiving DMARC reports by creating a DMARC record at <dmarcdomain>._report._dmarc.<reportdestdomain>. Test if IP is in the DNS blocklist of the zone, e.g. bl.spamcop.net. If the IP is in the blocklist, an explanation is printed. This is typically a URL with more information. Check the health of the DNS blocklist represented by zone, e.g. bl.spamcop.net. The health of a DNS blocklist can be checked by querying for 127.0.0.1 and 127.0.0.2. The second must be present and the first must not. Lookup the MTASTS record and policy for the domain. MTA-STS is a mechanism for a domain to specify if it requires TLS connections for delivering email. If a domain has a valid MTA-STS DNS TXT record at _mta-sts.<domain> it signals it implements MTA-STS. A policy can then be fetched at https://mta-sts.<domain>/.well-known/mta-sts.txt. The policy specifies the mode (enforce, testing, none), which MX servers support TLS and should be used, and how long the policy can be cached. Lookup the age of a domain in RDAP based on its latest registration. RDAP is the registration data access protocol. Registries run RDAP services for their top level domains, providing information such as the registration date of domains. This command looks up the "age" of a domain by looking at the most recent "registration", "reregistration" or "reinstantiation" event. Email messages from recently registered domains are often treated with suspicion, and some mail systems are more likely to classify them as junk. On each invocation, a bootstrap file with a list of registries (of top-level domains) is retrieved, without caching. Do not run this command too often with automation. Recreate and retrain the junk filter for the account or all accounts. Useful after having made changes to the junk filter configuration, or if the implementation has changed. Sendmail is a drop-in replacement for /usr/sbin/sendmail to deliver emails sent by unix processes like cron. If invoked as "sendmail", it will act as sendmail for sending messages. Its intention is to let processes like cron send emails. Messages are submitted to an actual mail server over SMTP. The destination mail server and credentials are configured in /etc/moxsubmit.conf, see mox config describe-sendmail. The From message header is rewritten to the configured address. When the addressee appears to be a local user, because the address has no @, the message is sent to the configured default address. If submitting an email fails, it is added to a directory moxsubmit.failures in the user's home directory. Most flags are ignored to fake compatibility with other sendmail implementations. A single recipient or the -t flag with a To-header is required. With the -t flag, Cc and Bcc headers are not handled specially, so Bcc is not removed and the addresses do not receive the email.
/etc/moxsubmit.conf should be group-readable and not readable by others and this binary should be setgid that group: Check the status of IP for the policy published in DNS for the domain. IPs may be allowed to send for a domain, or disallowed, and several shades in between. If not allowed, an explanation may be provided by the policy. If so, the explanation is printed. The SPF mechanism that matched (if any) is also printed. Lookup the SPF record for the domain and print it. Parse the record as an SPF record. If valid, nothing is printed. Lookup the TLSRPT record for the domain. A TLSRPT record typically contains an email address where reports about TLS connectivity should be sent. Mail servers attempting delivery to our domain should attempt to use TLS. TLSRPT lets them report how many connections successfully used TLS, and what kind of errors occurred otherwise. Parse and print the TLSRPT in the message. The report is printed in formatted JSON. Prints this mox version. Lists available methods, prints request/response parameters for a method, or calls a method with a request read from standard input. List available examples, or print a specific example. Change the IMAP UID validity of the mailbox, causing IMAP clients to refetch messages. This can be useful after manually repairing metadata about the account/mailbox. Opens the account database file directly. Ensure mox does not have the account open, or is not running. Reassign UIDs in one mailbox or all mailboxes in an account and bump UID validity, causing IMAP clients to refetch messages. Opens the account database file directly. Ensure mox does not have the account open, or is not running. Fix inconsistent UIDVALIDITY and UIDNEXT in messages/mailboxes/account. The next UID to use for a message in a mailbox should always be higher than any existing message UID in the mailbox. If it is not, the mailbox UIDNEXT is updated. Each mailbox has a UIDVALIDITY sequence number, which should always be lower than the per-account next UIDVALIDITY to use. If it is not, the account next UIDVALIDITY is updated. Opens the account database file directly. Ensure mox does not have the account open, or is not running. Ensure message sizes in the database match the sum of the message prefix length and on-disk file size. Messages with an inconsistent size are also parsed again. If an inconsistency is found, you should probably also run "mox bumpuidvalidity" on the mailboxes or entire account to force IMAP clients to refetch messages. Parse all messages in the account or all accounts again. Can be useful after upgrading mox with improved message parsing. Messages are parsed in batches, so other access to the mailboxes/messages is not blocked while reparsing all messages. Ensure messages in the database have a pre-parsed MIME form in the database. Recalculate message counts for all mailboxes in the account, and the total message size for quota. When a message is added to/removed from a mailbox, or when message flags change, the total, unread, unseen and deleted message counts are accounted, as are the total size of the mailbox and the total message size for the account. In case of a bug in this accounting, the numbers could become incorrect. This command will find, fix and print them. Parse a message and print a JSON representation. Reassign message threads. For all accounts, or optionally only the specified account. Threading for all messages in an account is first reset, and the new base subject and normalized message-id are saved with the message.
Then all messages are evaluated and matched against their parents/ancestors. Messages are matched based on the References header, with a fall-back to an In-Reply-To header, and if neither is present/valid, based only on base subject. A References header typically points to multiple previous messages in a hierarchy. From oldest ancestor to most recent parent. An In-Reply-To header would have only a message-id of the parent message. A message is only linked to a parent/ancestor if their base subject is the same. This ensures unrelated replies, with a new subject, are placed in their own thread. The base subject is lower cased, has whitespace collapsed to a single space, and some components removed: leading "Re:", "Fwd:", "Fw:", or bracketed tag (that mailing lists often add, e.g. "[listname]"), trailing "(fwd)", or enclosing "[fwd: ...]". Messages are linked to all their ancestors. If an intermediate parent/ancestor message is deleted in the future, the message can still be linked to the earlier ancestors. If the direct parent already wasn't available while matching, this is stored as the message having a "missing link" to its stored ancestors.
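The base-subject normalization described above can be sketched roughly as follows; this is an illustrative approximation, not mox's actual implementation, and the exact set of stripped prefixes and tags may differ:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // baseSubject approximates the normalization described above: lower case,
    // collapse whitespace, strip leading "Re:"/"Fwd:"/"Fw:" and bracketed list
    // tags, strip a trailing "(fwd)", and unwrap an enclosing "[fwd: ...]".
    func baseSubject(s string) string {
        s = strings.ToLower(strings.TrimSpace(s))
        s = regexp.MustCompile(`\s+`).ReplaceAllString(s, " ")

        // Unwrap "[fwd: ...]".
        if strings.HasPrefix(s, "[fwd:") && strings.HasSuffix(s, "]") {
            s = strings.TrimSpace(s[len("[fwd:") : len(s)-1])
        }

        // Repeatedly strip leading "re:", "fwd:", "fw:" and "[tag]" markers.
        lead := regexp.MustCompile(`^(re:|fwd:|fw:|\[[^\]]*\])\s*`)
        for {
            t := lead.ReplaceAllString(s, "")
            if t == s {
                break
            }
            s = strings.TrimSpace(t)
        }

        // Strip a trailing "(fwd)".
        s = strings.TrimSpace(strings.TrimSuffix(s, "(fwd)"))
        return s
    }

    func main() {
        fmt.Println(baseSubject("Re: [listname] Fwd:  Weekly report (fwd)")) // prints "weekly report"
    }

Two messages whose normalized subjects compare equal under a function like this would be candidates for linking into the same thread, subject to the References/In-Reply-To matching described above.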
Package tailetc implements a total-memory-cache etcd v3 client implemented by tailing (watching) an entire etcd. The client maintains a complete copy of the etcd database in memory, with values decoded into Go objects. It presents a simplified transaction model that assumes that multiple writers will not be contending on keys. When you call tx.Get or tx.Put, your Tx records the current etcd global revision. When you tx.Commit, if some newer revision of any key touched by Tx is in etcd then the commit will fail. Contention failures are reported as ErrTxStale from tx.Commit. Failures may be reported sooner as tx.Get or tx.Put errors, but the tx error is sticky, that is, if you ignore those errors the eventual error from tx.Commit will have ErrTxStale in its error chain. The Tx.Commit method waits before successfully returning until DB has caught up with the etcd global revision of the transaction. This ensures that sequential transactions happen in the strict database sequence. So if you serve a REST API from a single *etcd.DB instance, then the API will behave as users expect it to. If you know only one thing about this etcd client, know this: Everything else should follow a programmer's intuition for an in-memory map guarded by a RWMutex. We take advantage of etcd's watch model to maintain a *decoded* value cache of the entire database in memory. This means that querying a value is extremely cheap. Issuing tx.Get(key) involves no more work than holding a mutex read lock, reading from a map, and cloning the value. The etcd watch is then exposed to the user of *etcd.DB via the WatchFunc. This lets users maintain in-memory higher-level indexes into the etcd database that respond to external commits. WatchFunc is called while the database lock is held, so a WatchFunc implementation cannot synchronously issue transactions. The cache in etcd.DB is very careful to copy objects both into and out of the cache so that users of the etcd.DB cannot get pointers directly into the memory inside the cache. This means over-copying. Users can control precisely how much copying is done, and how, by providing a CloneFunc implementation. The general ownership semantics are: objects are copied as soon as they are passed to etcd, and any memory returned by etcd is owned by the caller.
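The contention model described above (record the revision when the transaction starts, fail the commit if any touched key has a newer revision) can be illustrated with a small self-contained sketch; the types below are hypothetical stand-ins for illustration, not the tailetc API:

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var errTxStale = errors.New("tx stale") // plays the role of ErrTxStale

    type entry struct {
        value string
        rev   int64
    }

    // store is a toy revisioned map standing in for the in-memory etcd mirror.
    type store struct {
        mu  sync.RWMutex
        rev int64
        m   map[string]entry
    }

    type tx struct {
        s      *store
        rev    int64             // global revision observed when the tx started
        writes map[string]string // buffered puts
        reads  []string          // keys read, re-checked at commit
    }

    func (s *store) begin() *tx {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return &tx{s: s, rev: s.rev, writes: map[string]string{}}
    }

    func (t *tx) get(key string) (string, bool) {
        t.s.mu.RLock()
        defer t.s.mu.RUnlock()
        t.reads = append(t.reads, key)
        e, ok := t.s.m[key]
        return e.value, ok
    }

    func (t *tx) put(key, value string) { t.writes[key] = value }

    func (t *tx) commit() error {
        t.s.mu.Lock()
        defer t.s.mu.Unlock()
        // If any touched key has a newer revision than the tx observed, fail.
        for _, key := range t.reads {
            if e, ok := t.s.m[key]; ok && e.rev > t.rev {
                return errTxStale
            }
        }
        for key := range t.writes {
            if e, ok := t.s.m[key]; ok && e.rev > t.rev {
                return errTxStale
            }
        }
        t.s.rev++
        for key, value := range t.writes {
            t.s.m[key] = entry{value: value, rev: t.s.rev}
        }
        return nil
    }

    func main() {
        s := &store{m: map[string]entry{}}

        a := s.begin()
        b := s.begin()
        a.put("k", "from-a")
        b.put("k", "from-b")
        fmt.Println(a.commit())                        // <nil>
        fmt.Println(errors.Is(b.commit(), errTxStale)) // true: k changed after b started
    }

As in the package itself, the losing writer learns about contention only at commit time and is expected to retry from fresh reads.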
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set LogLevel to control logging verbosity and LogPath to specify the log file path. In order to enable logging for the driver, you need to add the code below to your application. Declaring the elog variable and calling the elog.Initialize() function is mandatory; otherwise the application would fail with the error "runtime error: invalid memory address or nil pointer dereference". You can configure LogLevel and LogPath (i.e. the log file directory) as per your requirement. You may skip initializing the LogLevel and LogPath values. In that case, default values are used. The default value for LogLevel is DEBUG, while the default LogPath is the same directory as your application. Other valid values for 'LogLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". The level of security (SSL/TLS) that the driver uses for the connection to the data store: onlyUnSecured: The driver does not use SSL. preferredUnSecured: If the server provides a choice, the driver does not use SSL. preferredSecured: If the server provides a choice, the driver uses SSL. onlySecured: The driver does not connect unless an SSL connection is available. The Netezza server supports the same security levels. Cases which would fail: Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode. Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode. Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): The above example opens a database handle on NPS server 'vmnps-dw10.svl.ibm.com'. The Go driver should connect on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter markers, i.e. '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql. For additional instructions on querying see the documentation for the database/sql package.
nzgo also supports transaction queries as specified in the database/sql package, see https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below would create a file 'et1.txt' on the remote system from Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
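A minimal sketch of querying through database/sql with the '?' parameter markers described above; the driver name "nzgo", the import path, and the table/column names are assumptions for illustration:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/IBM/nzgo" // registers the driver; import path assumed
    )

    func main() {
        // Keyword/value DSN as described above; host, user, etc. are placeholders.
        dsn := "host=vmnps-dw10.svl.ibm.com port=5480 user=admin password=password dbname=db1 sslmode=require"
        db, err := sql.Open("nzgo", dsn) // driver name assumed to be "nzgo"
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The first '?' is bound to the first argument, the second to the second, and so on.
        rows, err := db.Query("SELECT name, age FROM employees WHERE age > ? AND dept = ?", 30, "sales")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var name string
            var age int
            if err := rows.Scan(&name, &age); err != nil {
                log.Fatal(err)
            }
            fmt.Println(name, age)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }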
Package resp3 implements the redis RESP3 protocol, which is used as of redis 6.0. RESP (REdis Serialization Protocol) is the protocol used in the Redis database; however, the protocol is designed to be used by other projects. With version 3 of the protocol, currently a work-in-progress design, the protocol aims to become even more generally useful to other systems that want to implement a protocol which is simple, efficient, and with a very large landscape of client library implementations. That means you can use this library to access other RESP3 projects. This library contains three important components: Value, Reader and Writer. Value represents a redis command or a redis response. It is a common struct for all RESP3 types. Reader can parse redis responses from redis servers or commands from redis clients. You can use it to implement redis 6.0 clients without paying attention to the underlying parsing; the new features of redis 6.0 can be implemented based on it. Writer is the redis writer. You can use it to send commands to redis servers. The RESP3 spec can be found at https://github.com/antirez/RESP3. A redis client based on it can be as simple as the following:
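The package's own client example is not reproduced here. To make the wire framing concrete, the self-contained sketch below hand-encodes a command the way a RESP writer would and switches the connection to RESP3 with HELLO 3; it deliberately avoids guessing the package's exact constructor and method names, which the Reader and Writer components wrap for you:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net"
    )

    // writeCommand frames a command as a RESP array of bulk strings, which is how
    // both RESP2 and RESP3 clients send commands, e.g. "*2\r\n$5\r\nHELLO\r\n$1\r\n3\r\n".
    func writeCommand(w *bufio.Writer, args ...string) error {
        fmt.Fprintf(w, "*%d\r\n", len(args))
        for _, a := range args {
            fmt.Fprintf(w, "$%d\r\n%s\r\n", len(a), a)
        }
        return w.Flush()
    }

    func main() {
        conn, err := net.Dial("tcp", "127.0.0.1:6379")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        w := bufio.NewWriter(conn)
        r := bufio.NewReader(conn)

        // HELLO 3 switches the connection to RESP3; the reply is a RESP3 map type,
        // which the package's Reader would decode into a Value for you.
        if err := writeCommand(w, "HELLO", "3"); err != nil {
            log.Fatal(err)
        }
        line, err := r.ReadString('\n')
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("first line of HELLO reply: %q\n", line) // e.g. "%7\r\n", a RESP3 map header
    }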
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped or use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If you are not on us-west-2 region or AWS deployment, then append the region after the account name, e.g. “<account>.<region>”. If you are not on AWS deployment, then append not only the region, but also the platform, e.g., “<account>.<region>.<platform>”. Account, region, and platform should be separated by a period (“.”), as shown above. If you are using a global url, then append connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. 
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option as it opens up access for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail in closed mode. validateDefaultParameters: true by default. Set to false to disable checks on the existence of and privileges for the Database, Schema, Warehouse and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is NOP; no output is generated. This is intentional, so that applications that use the same set of logger parameters do not conflict with glog, which is incorporated in the driver logging framework. In order to enable debug logging for the driver, add a build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with glog parameters. For example, the following command will generate all activity logs to standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently if the applications use the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in the context to execute methods that can take the context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue since the database data type is provided along with the data so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type.
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download takes place by a goroutine named "Chunk Downloader" asynchronously so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this doesn't help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Experimental: Custom JSON Decoder for parsing Result Set The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on the Travis Ubuntu box show five times less memory footprint while being four times slower. Be cautious when using this option. (Private Preview) JWT authentication ** Not recommended for production use until GA JWT tokens are supported when compiling with a Go version of 1.10 or higher. A binary compiled with a lower version of Go would return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct a DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure, specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell script: GET and PUT operations are unsupported.
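A minimal sketch of building a DSN from a Config and opening a handle, assuming the usual gosnowflake import path and that Config/DSN behave as described above; the account, user and other values are placeholders:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func main() {
        // Build the DSN from a Config; all names here are placeholders.
        cfg := &sf.Config{
            Account:   "myaccount",
            User:      "jsmith",
            Password:  "mypassword",
            Database:  "mydb",
            Schema:    "testschema",
            Warehouse: "mywh",
        }
        dsn, err := sf.DSN(cfg)
        if err != nil {
            log.Fatal(err)
        }

        db, err := sql.Open("snowflake", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // A trivial query to confirm the connection works.
        var v int
        if err := db.QueryRow("SELECT 1").Scan(&v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("result:", v)
    }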
Copyright 2023 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Package webrisk implements a client for the Web Risk API v4. At a high-level, the implementation does the following: Essentially the query is presented to three major components: The database, the cache, and the API. Each of these may satisfy the query immediately, or may say that it does not know and that the query should be satisfied by the next component. The goal of the database and cache is to satisfy as many queries as possible to avoid using the API. Starting with a user query, a hash of the query is performed to preserve privacy regarding the exact nature of the query.
For example, if the query was for a URL, then this would be the SHA256 hash of the URL in question. Given a query hash, we first check the local database (which is periodically synced with the global Web Risk API servers). This database will either tell us that the query is definitely safe, or that it does not have enough information. If we are unsure about the query, we check the local cache, which can satisfy queries immediately if the same query was made recently. The cache will tell us that the query is either safe, unsafe, or unknown (because it is not in the cache or the entry has expired). If we are still unsure about the query, we finally query the API server, which is guaranteed to return an authoritative answer, assuming no networking failures.
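As a rough illustration of this flow, the sketch below hashes a URL query and consults the three layers in order, falling through only on an inconclusive answer. The layer interface and verdict type are hypothetical stand-ins for the package's database, cache, and API components, not its actual API.

	// Sketch: layered lookup (database -> cache -> API) over a hashed query.
	package webrisksketch

	import (
		"crypto/sha256"
	)

	type verdict int

	const (
		verdictUnknown verdict = iota // layer cannot decide; ask the next one
		verdictSafe
		verdictUnsafe
	)

	// The three layers share the same shape: given a query hash, return a verdict.
	type layer interface {
		Check(hash [sha256.Size]byte) verdict
	}

	// lookup consults the database, then the cache, then the remote API, stopping
	// at the first authoritative answer.
	func lookup(url string, db, cache, api layer) verdict {
		hash := sha256.Sum256([]byte(url)) // hashing preserves privacy of the raw query
		for _, l := range []layer{db, cache, api} {
			if v := l.Check(hash); v != verdictUnknown {
				return v
			}
		}
		// The API is expected to be authoritative; reaching this point indicates a
		// networking failure or similar exceptional condition.
		return verdictUnknown
	}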
Package lldb implements a low level database engine. The database model used could be considered a specific implementation of some small(est) intersection of models listed in [1]. As a settled term is lacking, it will be called here a 'Virtual memory model' (VMM).

2016-07-24: v1.0.4 brings some performance improvements.

2016-07-22: v1.0.3 brings some small performance improvements.

2016-07-12: v1.0.2 now uses packages from cznic/internal.

2016-07-12: v1.0.1 adds a license for testdata/fortunes.txt.

2016-07-11: First standalone release v1.0.0 of the package previously published as experimental (github.com/cznic/exp/lldb).

A Filer is an abstraction of storage. A Filer may be a part of some process' virtual address space, an OS file, a networked remote file, etc. Persistence of the storage is optional, opaque to the VMM, and specific to a concrete Filer implementation.

lldb provides a mechanism to allocate, reallocate (resize), and deallocate (and later reclaim unused) contiguous parts of a Filer, called blocks. Blocks are identified and referred to by a handle, an int64.

In addition to the VMM-like services, lldb provides volatile and non-volatile BTrees. Keys and values of a BTree are limited in size to 64kB each (a bit more, actually). Support for larger keys/values, if desired, can be built atop a BTree within certain limits.

A handle is the abstracted storage counterpart of a memory address. There is one fundamental difference, though: resizing a block never changes the handle which refers to the resized block, so a handle is more akin to a unique numeric id/key. Yet it shares one property of pointers: handles can be associated with blocks again after the original block of a handle was deallocated. In other words, a handle's uniqueness domain is the state of the database, not an ever growing numbering sequence. Also, as with memory pointers, dangling handles can be created and blocks overwritten when such handles are used. Using a zero handle to refer to a block will not panic; however, the resulting error is effectively the same exceptional situation as dereferencing a nil pointer.

Allocated/used blocks are limited in size to only a little more than 64kB. Bigger semantic entities/structures must be built in lldb's client code. The content of a block has no semantics attached; it is only a fully opaque []byte.

Use of "scalars" applies to EncodeScalars, DecodeScalars and Collate. The first two "to bytes" and "from bytes" functions are suggested for handling multi-valued Allocator content items and/or keys/values of BTrees (using Collate for keys). Types called "scalar" are:

Concrete implementations of some of the VMM interfaces are included to ease writing simple client code, for testing, and possibly as examples. More details are in the documentation of those implementations.
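To make the handle/block model concrete, here is a small sketch that allocates, reads back, and frees a block inside an in-memory Filer, packing multi-valued content with EncodeScalars. The import path and the exact signatures of NewMemFiler, NewAllocator, Alloc, Get, Free, and EncodeScalars are assumptions based on the description above and should be checked against the lldb release in use.

	// Sketch: allocating, reading, and freeing a block via its int64 handle.
	package main

	import (
		"fmt"
		"log"

		"github.com/cznic/lldb" // assumed import path for this package
	)

	func main() {
		// A MemFiler is a Filer backed by process memory; nothing is persisted.
		filer := lldb.NewMemFiler()

		// The allocator manages blocks inside the Filer and hands out int64 handles.
		a, err := lldb.NewAllocator(filer, &lldb.Options{})
		if err != nil {
			log.Fatal(err)
		}

		// Multi-valued content can be packed with EncodeScalars.
		content, err := lldb.EncodeScalars("user:42", int64(1234), true)
		if err != nil {
			log.Fatal(err)
		}

		// Allocate a block; from now on the handle, not a pointer, refers to it.
		h, err := a.Alloc(content)
		if err != nil {
			log.Fatal(err)
		}

		// Read the block back through its handle.
		b, err := a.Get(nil, h)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("handle %d holds %d bytes\n", h, len(b))

		// Deallocate; the handle may later be reused for a different block,
		// so keeping it around afterwards creates a dangling handle.
		if err := a.Free(h); err != nil {
			log.Fatal(err)
		}
	}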