Package nzgo is a pure Go driver for the database/sql package for working with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging is enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as required. There is one more logger configuration parameter, "additionalLogFile", which can be used to set an additional log file. additionalLogFile can also be used to enable writing logs to stdout by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF turns off logging; it disables both the internal and additionalLogFile logs. These logger configuration parameters should be specified in the connection string. securityLevel controls the level of security (SSL/TLS) that the driver uses for the connection to the data store. onlyUnSecured: the driver does not use SSL. preferredUnSecured: if the server provides a choice, the driver does not use SSL. preferredSecured: if the server provides a choice, the driver uses SSL. onlySecured: the driver does not connect unless an SSL connection is available. The Netezza server supports the same security levels. Combinations that fail: the client uses 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode; the client uses 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode; the client uses 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode; the client uses 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza driver supports the following connection syntaxes (or data source name formats): In this case the application is running on the NPS server itself, so 'localhost' is used, and the driver should connect on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel set to 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
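As a hedged sketch of connecting and querying with '?' markers (the import path, the exact connection-string keys, and the table and column names below are assumptions for illustration, not taken verbatim from the driver):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/IBM/nzgo" // assumed import path; registers the "nzgo" driver
    )

    func main() {
        // Keys follow the connection parameters described above; exact spellings may differ.
        connStr := "host=localhost port=5480 user=admin password=password dbname=db1 " +
            "sslmode=require logLevel=Info logPath=/tmp"
        db, err := sql.Open("nzgo", connStr)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // nzgo uses '?' parameter markers; arguments are substituted in order.
        rows, err := db.Query("SELECT id, name FROM emp WHERE dept = ? AND salary > ?", "sales", 50000)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var (
                id   int
                name string
            )
            if err := rows.Scan(&id, &name); err != nil {
                log.Fatal(err)
            }
            fmt.Println(id, name)
        }
    }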
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package; see https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The query below creates a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'; a sketch follows. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
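A hedged sketch of that unload, reusing the db handle from the sketch above (the file path and the external-table options, in particular REMOTESOURCE 'golang', are assumptions; check the Netezza external-table documentation for the options your client needs):

    // Unload table t2 into a '|'-delimited flat file on the client machine.
    if _, err := db.Exec(`CREATE EXTERNAL TABLE '/tmp/et1.txt' USING (DELIMITER '|' REMOTESOURCE 'golang') AS SELECT * FROM t2`); err != nil {
        log.Fatal(err)
    }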
Package tarmac is a client package for WASM functions running within a Tarmac server. This package provides user-friendly functions that wrap the Web Assembly Procedure Call (waPC) based functions of Tarmac. Guest WASM functions running inside Tarmac can use this library to call back the Tarmac host and perform host-level actions such as storing data within the database, logging specific data, or looking up configurations.
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. 
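As a hedged sketch of the bulk-import flow described above (db is an open *sql.DB inside which pq is the registered driver, the users table, its columns, and the rows slice are illustrative, and the pq package is assumed to be imported):

    txn, err := db.Begin()
    if err != nil {
        log.Fatal(err)
    }
    // Prepare a COPY FROM statement for the target table and columns.
    stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
    if err != nil {
        log.Fatal(err)
    }
    for _, row := range rows {
        // Each Exec call buffers one row.
        if _, err := stmt.Exec(row.Name, row.Age); err != nil {
            log.Fatal(err)
        }
    }
    // A final Exec with no arguments flushes all buffered data.
    if _, err := stmt.Exec(); err != nil {
        log.Fatal(err)
    }
    if err := stmt.Close(); err != nil {
        log.Fatal(err)
    }
    if err := txn.Commit(); err != nil {
        log.Fatal(err)
    }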
To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/flynn/pq/listen_example.
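A hedged sketch of the Listener flow described above (connStr and the channel name "events" are placeholders; the pq, log, and time packages are assumed to be imported):

    // reportProblem is called by pq on listener lifecycle events.
    reportProblem := func(ev pq.ListenerEventType, err error) {
        if err != nil {
            log.Println("listener event error:", err)
        }
    }
    listener := pq.NewListener(connStr, 10*time.Second, time.Minute, reportProblem)
    if err := listener.Listen("events"); err != nil {
        log.Fatal(err)
    }
    for n := range listener.Notify {
        if n == nil {
            // A nil notification is sent after the connection is re-established;
            // re-check any state you care about here.
            continue
        }
        log.Printf("notification on channel %q: %s", n.Channel, n.Extra)
    }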
Tlogdb is a trivial transparent log client and server. It is meant as more a starting point to be customized than a tool to be used directly. A transparent log is a tamper-proof, append-only, immutable log of data records. That is, if the server were to violate the “append-only, immutable” properties, that tampering would be detected by the client. For more about transparent logs, see https://research.swtch.com/tlog. To create a new log (new server state): The newlog command creates a new database in file (default tlog.db) containing an empty log and a newly generated public/private key pair for the server using the given name. The newlog command prints the newly generated public key. To see it again: To add a record named name to the log: To serve the authenticated log data: The default server address is localhost:6655. The client maintains a cache database both for performance (avoiding duplicate downloads) and for storing the server's public key and the most recently seen log head. To create a new client cache: The newcache command creates a new database in file (default tlogclient.db) and stores the given public key for later use. The key should be the output of the tlogdb's server commands newlog or publickey, described above. To look up a record in the log: The default server address is again localhost:6655. The protocol between client and server is the same as used in the Go module checksum database, documented at https://golang.org/design/25530-sumdb#checksum-database. There are three endpoints: /latest serves a signed tree head; /lookup/NAME looks up the given name, and /tile/* serves log tiles. Putting the various commands together in a Unix shell:
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed, simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is now able to use ql as its storage backend, for those who do not wish to use an external MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API and to allow use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener necessary in order for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, it will distress most clients, and they may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be more ideal, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A new feature added to goat in order to allow better interoperability with many languages is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. The API features several modes of authentication, including HTTP Basic for login and HMAC-SHA1 for other calls. Upon logging into the API using HTTP Basic with a username and password pair, an API public key and secret will be generated. The public key is used as the username for HTTP Basic authentication, and the secret key is used to calculate an HMAC-SHA1 signature for the password. As part of API signature generation, a random nonce value must be generated and added to the request. It is added to the password portion of the HTTP Basic request, and also to the string which is used to create the signature. Nonce values must be changed on every request, or the request will fail.
The current pseudocode format of the HMAC-SHA1 signature is as follows: The proper format for an HTTP Basic request is as follows: When the public key, nonce, and API signature are sent via HTTP Basic, the server will verify the signature. Successful authentication will allow access to the API. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Request an API public key and secret key for this user. The public key, user ID, and secret key are used to authenticate further API calls. The expire time indicates when this key is set to expire. Further API calls will extend the expiration time. Retrieve a list of all files tracked by goat. Some extended attributes are not added, to reduce strain on the database and to provide a more general overview. Retrieve extended attributes about a specific file with a matching ID. This provides counts for the number of completions, seeders, and leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. Create a user with the specified username, password, and torrent limit. Retrieve a list of all users registered to goat, including their ID, torrent limit, and username. Retrieve information about a single user with a matching ID, including their ID, torrent limit, and username. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
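As a generic, hedged illustration of the nonce-plus-HMAC-SHA1 scheme described in the authentication section above (the exact string goat signs, and the field separator used here, are assumptions, not goat's actual format):

    import (
        "crypto/hmac"
        "crypto/sha1"
        "encoding/hex"
    )

    // signAPIRequest sketches the general idea: a per-request nonce is mixed
    // into the signed string, and the secret key drives an HMAC-SHA1 digest.
    func signAPIRequest(secretKey, publicKey, nonce, method, resource string) string {
        mac := hmac.New(sha1.New, []byte(secretKey))
        mac.Write([]byte(publicKey + "-" + nonce + "-" + method + "-" + resource))
        return hex.EncodeToString(mac.Sum(nil))
    }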
Package goBolt implements drivers for the Neo4J Bolt Protocol Versions 1-4. There are some limitations to the types of collections the driver supports. Specifically, maps should always be of type map[string]interface{} and lists should always be of type []interface{}. It doesn't seem that the Bolt protocol supports uint64 either, so the biggest number it can send right now is the int64 max. The URL format is: `bolt://(user):(password)@(host):(port)` Schema must be `bolt`. User and password are only necessary if you are authenticating. TLS is supported by using query parameters on the connection string, like so: `bolt://host:port?tls=true&tls_no_verify=false` The supported query params are: * timeout - the number of seconds to set the connection timeout to. Defaults to 60 seconds. * tls - Set to 'true' or '1' if you want to use TLS encryption * tls_no_verify - Set to 'true' or '1' if you want to accept any server certificate (for testing, not secure) * tls_ca_cert_file - path to a custom ca cert for a self-signed TLS cert * tls_cert_file - path to a cert file for this client (need to verify this is processed by Neo4j) * tls_key_file - path to a key file for this client (need to verify this is processed by Neo4j) Errors returned from the API support wrapping, so if you receive an error from the library, it might be wrapping other errors. You can get the innermost error by using the `InnerMost` method. Failure messages from Neo4J are reported, along with their metadata, as an error. To get the failure message metadata from a wrapped error, call `err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata` If there is an error with the database connection, you should get a database/sql/driver ErrBadConn as per the best practice recommendations of the Golang SQL Driver. However, this error may be wrapped, so you might have to call `InnerMost` to get it, as specified above.
Package isokey allows you to make and verify API keys without a database connection via HMAC signatures. The keys are scalable and persistent. All information is stored in the key itself, so it stays with the client.
Package sdk is a client package for WASM functions running within a Tarmac server. This package provides user-friendly functions that wrap the Web Assembly Procedure Call (waPC) based functions of Tarmac. Guest WASM functions running inside Tarmac can use this library to call back the Tarmac host and perform host-level actions such as storing data within the database, logging specific data, or looking up configurations.
Package radix implements an asynchronous Redis client. Client is a structure for accessing a Redis database. After establishing a connection with NewClient, commands can be executed with Client.Command. Client.Command returns a Reply with different methods for accessing the retrieved values. Client.MultiCommand can be used for sending multiple commands in a single request and Client.Transaction offers a simple way for executing atomic requests. Client.Subscription returns a Subscription that can be used for listening to published messages.
Package pgconn is a low-level PostgreSQL database driver. pgconn provides lower level access to a PostgreSQL connection than a database/sql or pgx connection. It operates at nearly the same level as the C library libpq. Use Connect to establish a connection. It accepts a connection string in URL or DSN format and will read the environment for libpq style environment variables. ExecParams and ExecPrepared execute a single query. They return readers that iterate over each row. The Read method reads all rows into memory. Exec and ExecBatch can execute multiple queries in a single round trip. They return readers that iterate over each query result. The ReadAll method reads all query results into memory. All potentially blocking operations take a context.Context. If a context is canceled while the method is in progress the method immediately returns. In most circumstances, this will close the underlying connection. The CancelRequest method may be used to request the PostgreSQL server cancel an in-progress query without forcing the client to abort.
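A hedged sketch of the single-query path (the DATABASE_URL environment variable and the generate_series query are illustrative; the context, log, os, and pgconn packages are assumed to be imported):

    pgConn, err := pgconn.Connect(context.Background(), os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatalln("pgconn failed to connect:", err)
    }
    defer pgConn.Close(context.Background())

    // ExecParams sends one parameterized query; Read loads every row into memory.
    result := pgConn.ExecParams(context.Background(),
        "select generate_series(1, $1)", [][]byte{[]byte("5")}, nil, nil, nil).Read()
    if result.Err != nil {
        log.Fatalln("query failed:", result.Err)
    }
    for _, row := range result.Rows {
        log.Println(string(row[0]))
    }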
Origins is an open source bi-temporal database for storing and retrieving facts about the state of things. It supports "time-travel" queries, aggregate views, and change detection. The primary interface is the CLI, which can be installed by running: This package defines some of the primitive structures and algorithms for manipulating, reading, and writing facts and is used to build higher level client APIs. Fact sorting is done using the Timsort algorithm, a hybrid of merge sort and insertion sort. This was chosen because facts are generally partially sorted by entity, since facts are derived from higher level objects. For comparison, comparators for the default Quicksort algorithm are implemented for benchmarking purposes. Wikipedia: https://en.wikipedia.org/wiki/Timsort Comparison to quicksort: http://stackoverflow.com/a/19587279/407954
The Escher HTTP request signing framework is intended to provide a secure way for clients to sign HTTP requests, and for servers to check the integrity of these messages. The goal of the protocol is to introduce an authentication solution for REST API services that is more secure than the currently available protocols. RFC 2617 (HTTP Authentication) defines Basic and Digest Access Authentication. They're widely used, but Basic Access Authentication doesn't encrypt the secret and doesn't add integrity checks to the requests. Digest Access Authentication sends the secret encrypted, but the algorithm of creating a checksum with a nonce using MD5 should not be considered highly secure these days, and as with Basic Access Authentication, there's no way to check the integrity of the message. RFC 6749 (OAuth 2.0 Authorization) enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This is not helpful for a machine-to-machine communication situation, like REST API authentication, because typically there's no third-party user involved. Additionally, after a token is obtained from the authorization endpoint, it is used with no encryption, and it doesn't provide integrity checking or prevent message replay. OAuth 2.0 is a stateful protocol which needs a database to store the tokens for client sessions. Amazon and other service providers created protocols addressing these issues; however, there is no public standard with open source implementations available from them. As Escher is based on a publicly documented protocol that is widely used in the wild, the specification does not include novelty techniques. 2. Signing an HTTP Request Escher defines a stateless signature generation mechanism. The signature is calculated from the key parts of the HTTP request, a service identifier string called the credential scope, and a client key and client secret. The signature generation steps are: canonicalizing the request, creating a string to calculate the signature, and adding the signature to the original request. Escher supports two hash algorithms: SHA256 and SHA512, designed by the NSA (U.S. National Security Agency). 2.1. Canonicalizing the Request In order to calculate a checksum from the key HTTP request parts, the HTTP request method, the request URI, the query parts, the headers, and the request body have to be canonicalized. The output of the canonicalization step will be a string including the request parts separated by LF (line feed, "\n") characters. The string will be used to calculate a checksum for the request. 2.1.1. The HTTP method The HTTP method defined by RFC2616 (Hypertext Transfer Protocol) is case sensitive, and must be available in upper case; no transformation has to be applied: POST 2.1.2. The Path The path is the absolute path of the URL. It starts with a slash (/) character, and does not include the query part (and the question mark). Escher follows the rules defined by RFC3986 (Uniform Resource Identifier) to normalize the path. Basically it means: convert relative paths to absolute, and remove redundant path components.
URI-encode each path component: the "reserved characters" defined by RFC3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty paths to /. For example: 2.1.3. The Query String RFC3986 (Uniform Resource Identifier) should provide guidance for canonicalization of the query string, but here's the complete list of the rules to be applied: URI-encode each query parameter name and value; the "reserved characters" defined by RFC3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (e.g. a%c2%b1b to a%C2%B1b). Normalize empty query strings to the empty string. Sort query parameters by the encoded parameter names (ASCII order). Do not shorten parameter values if their parameter name is the same (key=B&key=A is a valid output), as the order of parameters in a URL may be significant (this is not defined by the HTTP standard). Separate parameter names and values by = signs, including = for empty values, too. Separate parameters by &. For example: 2.1.4. The Headers To canonicalize the headers, the following rules have to be followed: Lower case the header names. Separate header names and values by a :, with no spaces. Sort header names into alphabetical order (ASCII). Group headers with the same name into one header, and separate their values by commas, without sorting. Trim header values, keeping all the spaces between quote characters ("). For example: 2.1.5. Signed Headers The list of headers to include when calculating the signature. Lower cased values of the header names, separated by ;, like this: date;host 2.1.6. Body Checksum A checksum for the request body, aka the payload, has to be calculated. Escher supports SHA-256 and SHA-512 algorithms for checksum calculation. If the request contains no body, an empty string has to be used as the input for the hash algorithm. The selected algorithm will be added later to the authorization header, so the server will be able to use the same algorithm for validation. The checksum of the body has to be presented as a lower cased hexadecimal string, for example: 2.1.7. Concatenating the Canonicalized Parts All the steps above produce a row of data, except the headers canonicalization, as it creates one row per header. These have to be concatenated with LF (line feed, "\n") characters into a string. An example: 2.2. Creating the Signature The next step is creating another string which will be directly used to calculate the signature. 2.2.1. Algorithm ID The algorithm ID comes from the algo_prefix (default value is ESR) and the algorithm used to calculate checksums during the signing process. The string algo_prefix, "HMAC", and the algorithm name should be concatenated with dashes, like this: 2.2.2. Long Date The long date is the request date in the ISO 8601 basic format, like YYYYMMDD + T + HHMMSS + Z. Note that the basic format uses no punctuation. An example is: This date has to be added later, too, as a date header (default header name is X-Escher-Date). 2.2.3.
Date and Credential Scope The next piece of information is the short date and the credential scope, concatenated with a / character. The short date is the date part of the request date in ISO 8601 basic format; the credential scope is defined by the service. Example: This will be added later, too, as part of the authorization header (default header name is X-Escher-Auth). 2.2.4. Checksum of the Canonicalized Request Take the output of step 2.1.7. and create a checksum from the canonicalized request string. This checksum has to be represented as a lower cased hexadecimal string, too. Something like this will be an output: 2.2.5. Concatenating the Signing String Concatenate the outputs of the previous 2.2. steps with LF characters. Example output: 2.2.6. The Signing Key The signing key is based on the algo_prefix, the client secret, the parts of the credential scope, and the request date. Take the algo_prefix and concatenate the client secret to it. First apply the HMAC algorithm to the request date, then apply the resulting value to each of the credential scope parts (split at /). The end result is a binary signing key. Pseudo code: 2.2.7. Create the Signature The signature is created from the output of steps 2.2.5. (Signing String) and 2.2.6. (Signing Key). With the selected algorithm, create a checksum. It has to be represented as a lower cased hexadecimal string. Something like this will be an output: 2.3. Adding the Signature to the Request The final step of the Escher signing process is adding the Signature to the request. Escher adds a new header to the request; by default, the header name is X-Escher-Auth. The header value will include the algorithm ID (see 2.2.1.), the client key, the short date and the credential scope (see 2.2.3.), the signed headers string (see 2.1.5.), and finally the signature (see 2.2.7.). The values of these inputs have to be concatenated like this: 3. Presigning a URL The URL presigning process is very similar to the request signing procedure. But for a URL, there are no headers and no request body, so the calculation of the Signature is different. Also, the Signature cannot be added to the headers, but is included as query parameters. A significant difference is that presigning allows defining an expiration time. By default, it is 86400 seconds (24 hours). The current time and the expiration time will be included in the URL, and the server has to check whether the URL is expired. 3.1. Canonicalizing the URL to Presign The canonicalization for URL presigning is the same process as for HTTP requests; in this section we cover the differences only. 3.1.1. The HTTP method The HTTP method for presigned URLs is fixed to: For example: 3.1.3. The Query String The query comes from the URL, but the algorithm, credentials, date, expiration time, and signed headers have to be added to the query parts. 3.1.4. The Headers A URL has no headers, so Escher creates the Host header based on the URL's domain information and adds it to the canonicalized request. For example: 3.1.5. Signed Headers It will be host, as that's the only header included. Example:
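As a hedged sketch of the signing-key derivation (2.2.6.) and final signature (2.2.7.) using SHA256 (the algo_prefix "ESR", short date, and credential scope values in the usage comment are placeholders):

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "strings"
    )

    // signingKey derives the binary signing key as described in 2.2.6.:
    // start from algo_prefix + client secret, HMAC the short date, then
    // HMAC each credential-scope part in turn.
    func signingKey(algoPrefix, clientSecret, shortDate, credentialScope string) []byte {
        key := []byte(algoPrefix + clientSecret)
        for _, part := range append([]string{shortDate}, strings.Split(credentialScope, "/")...) {
            mac := hmac.New(sha256.New, key)
            mac.Write([]byte(part))
            key = mac.Sum(nil)
        }
        return key
    }

    // signature applies the signing key to the string-to-sign (2.2.5.) and
    // returns a lower cased hexadecimal string, as required by 2.2.7.
    func signature(key []byte, stringToSign string) string {
        mac := hmac.New(sha256.New, key)
        mac.Write([]byte(stringToSign))
        return hex.EncodeToString(mac.Sum(nil))
    }

    // Example (placeholder values):
    //   key := signingKey("ESR", "secret", "20181024", "eu-vienna/pictures/escher_request")
    //   sig := signature(key, stringToSign)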
Package hord provides a simple and extensible interface for interacting with various database systems in a uniform way. Hord is designed to be a database-agnostic library that provides a common interface for interacting with different database systems. It allows developers to write code that is decoupled from the underlying database technology, making it easier to switch between databases or support multiple databases in the same application. To use Hord, import it as follows: To create a database client, you need to import and use the appropriate driver package along with the `hord` package. For example, to use the Redis driver: Each driver provides its own `Dial` function to establish a connection to the database. Refer to the specific driver documentation for more details. Once you have a database client, you can use it to perform various database operations. The API is consistent across different drivers. Refer to the `hord.Database` interface documentation for a complete list of available methods. Hord provides common error types and constants for consistent error handling across drivers. Refer to the `hord` package documentation for more information on error handling. Contributions to Hord are welcome! If you want to add support for a new database driver or improve the existing codebase, please refer to the contribution guidelines in the project's repository.
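A hedged sketch of that flow with the Redis driver (the driver import path, the Config fields, and the method signatures below are assumptions drawn from the description above and may not match the actual driver exactly):

    import (
        "log"

        "github.com/madflojo/hord/drivers/redis" // assumed driver import path
    )

    func main() {
        // Dial the Redis driver; the Config fields are assumptions.
        db, err := redis.Dial(redis.Config{Server: "127.0.0.1:6379"})
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The hord.Database interface exposes simple key/value operations.
        if err := db.Set("greeting", []byte("hello")); err != nil {
            log.Fatal(err)
        }
        v, err := db.Get("greeting")
        if err != nil {
            log.Fatal(err)
        }
        log.Println(string(v))
    }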
Package httplog logs http requests and responses. It's highly configurable; e.g. in production, log all requests and responses but don't log the body or headers, while in your dev environment log everything. httplog also has different ways to log depending on your preference: structured logging via JSON, relational database logging, or just plain standard library logging. httplog has logic to turn logging on/off based on options you can either pass in to the middleware handler or supply via a JSON input file included with the library. httplog offers three middleware choices, each of which adheres to fairly common middleware patterns: a simple HandlerFunc (`LogHandlerFunc`), a function (`LogHandler`) that takes a handler and returns a handler (aka Constructor) (`func (http.Handler) http.Handler`) often used with alice (https://github.com/justinas/alice), and finally a function (`LogAdapter`) that returns an Adapter type. An `httplog.Adapt` function and `httplog.Adapter` type are provided. Beyond logging request and response elements, httplog creates a unique id for each incoming request (using xid (https://github.com/rs/xid)) and sets it (and a few other key request elements) into the request context. You can access these context items using provided helper functions, including a function that returns an audit struct you can add to response payloads to provide clients with helpful information for support. !!!!WARNING!!!! - This package works, but is something I wrote a long time ago and really needs to be updated. I logged Issue #8 to some day address this.
Package rds provides the client and types for making API requests to Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Note that Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and some related topics from the user guide are listed below. Amazon RDS API Reference For the alphabetical list of API actions, see API Actions (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Operations.html). For the alphabetical list of data types, see Data Types (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_Types.html). For a list of common query parameters, see Common Parameters (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonParameters.html). For descriptions of the error codes, see Common Errors (http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html). Amazon RDS User Guide For a summary of the Amazon RDS interfaces, see Available RDS Interfaces (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html#Welcome.Interfaces). For more information about how to use the Query API, see Using the Query API (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Using_the_Query_API.html). See https://docs.aws.amazon.com/goto/WebAPI/rds-2014-10-31 for more information on this service. See the rds package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/rds/ To use Amazon Relational Database Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon Relational Database Service client RDS for more information on creating a client for this service.
https://docs.aws.amazon.com/sdk-for-go/api/service/rds/#New The rdsutils package's BuildAuthToken function provides a connection authentication token builder. Given an endpoint of the RDS database, an AWS region, a DB user, and AWS credentials, the function will create a presigned URL to use as the authentication token for the database's connection. The following example shows how to use BuildAuthToken to create an authentication token for connecting to a MySQL database in RDS. See the rdsutils package for more information. http://docs.aws.amazon.com/sdk-for-go/api/service/rds/rdsutils/
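A hedged sketch of that flow using the Go SDK and the go-sql-driver/mysql DSN format (the endpoint, region, user, and database name are placeholders):

    import (
        "database/sql"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/rds/rdsutils"
        _ "github.com/go-sql-driver/mysql"
    )

    func main() {
        sess := session.Must(session.NewSession())
        endpoint := "mydb.123456789012.us-east-1.rds.amazonaws.com:3306" // placeholder
        region, user, dbname := "us-east-1", "dbuser", "mydb"

        // BuildAuthToken returns a presigned URL used as the password.
        token, err := rdsutils.BuildAuthToken(endpoint, region, user, sess.Config.Credentials)
        if err != nil {
            log.Fatal(err)
        }

        dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
            user, token, endpoint, dbname)
        db, err := sql.Open("mysql", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    }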
Package database is the database client wrapper
Package osquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). osquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `osquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. osquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `osquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: osquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `osquery` will generate an array even if there's only one query object.
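A hedged sketch of the chaining style (the osquery import path and the Search/Term/Run builder names are assumptions based on the description above; the index name is a placeholder):

    import (
        "context"
        "log"

        "github.com/aquasecurity/osquery" // assumed import path
        "github.com/elastic/go-elasticsearch/v7"
    )

    func example() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build a term query and execute it; the result is a raw *esapi.Response.
        res, err := osquery.Search().
            Query(osquery.Term("user", "Kimchy")).
            Run(es,
                es.Search.WithContext(context.TODO()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
    }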
Package clustersql is an SQL "meta"-Driver - a clustering, implementation-agnostic wrapper for any backend implementing "database/sql/driver". It does (latency-based) load-balancing and error-recovery over the registered set of nodes. It is assumed that database state is transparently replicated over all nodes by some database-side clustering solution. This driver ONLY handles the client side of such a cluster. This package simply multiplexes the driver.Open() function of sql/driver to every attached node. The function is called on each node, returning the first successfully opened connection. (Any connections opening subsequently will be closed.) If opening does not succeed for any node, the latest error gets returned. Any other errors will be masked by default. However, any given latest error for any attached node will remain exposed through expvar, as well as some basic counters and timestamps. To make use of this kind of clustering, use this package with any backend driver implementing "database/sql/driver" like so: There is currently no way around instantiating the backend driver explicitly. You can perform backend-driver specific settings such as: Create a new clustering driver with the backend driver. Add nodes, including driver-specific name format, in this case Go-MySQL DSN; here, we add three nodes belonging to a galera (https://mariadb.com/kb/en/mariadb/documentation/replication-cluster-multi-master/galera/) cluster. Make the clusterDriver available to the go sql interface under an arbitrary name. Open the registered clusterDriver with an arbitrary DSN string (not used). Continue to use the sql interface as documented at http://golang.org/pkg/database/sql/. A sketch of these steps appears below. Before using this in production, you should configure your cluster details in config.toml and run: Note however, that non-failure of the above is no guarantee for a correctly set-up cluster. Finally, you SHOULD set db.MaxIdleConns and db.MaxOpenConns to a non-zero value. Although the sql driver usually does a good job of doing its own pooling, file descriptors can leak in corner cases (of which this library might constitute an example).
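A hedged sketch of those steps (the NewDriver and AddNode names are assumptions based on the description above, as is the import path; the DSNs are placeholders):

    import (
        "database/sql"

        "github.com/benthor/clustersql" // assumed import path
        "github.com/go-sql-driver/mysql"
    )

    func openCluster() (*sql.DB, error) {
        // Create a new clustering driver around the backend MySQL driver.
        clusterDriver := clustersql.NewDriver(mysql.MySQLDriver{}) // assumed constructor name

        // Add nodes using the backend driver's DSN format (Go-MySQL DSNs here).
        clusterDriver.AddNode("galera1", "user:password@tcp(dbhost1:3306)/db")
        clusterDriver.AddNode("galera2", "user:password@tcp(dbhost2:3306)/db")
        clusterDriver.AddNode("galera3", "user:password@tcp(dbhost3:3306)/db")

        // Register under an arbitrary name and open with an arbitrary (unused) DSN.
        sql.Register("cluster", clusterDriver)
        db, err := sql.Open("cluster", "whatever")
        if err != nil {
            return nil, err
        }

        // As recommended above, set non-zero pool limits.
        db.SetMaxIdleConns(5)
        db.SetMaxOpenConns(10)
        return db, nil
    }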
Package otgorm allows for the wrapping of GORM calls to databases with OpenTelemetry tracing spans. You only need to create your GORM db client and pass it into otgorm.WithContext along with a context.Context. If there is a parent span referenced within the context, the GORM call will be a child span.
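A hedged sketch of that usage (the argument order of otgorm.WithContext, and the gorm import path and model, are assumptions based on the description above):

    import (
        "context"

        "github.com/jinzhu/gorm" // assumed gorm version/import path
    )

    type User struct {
        ID   uint
        Name string
    }

    // listUsers runs a GORM query whose span becomes a child of any span in ctx.
    func listUsers(ctx context.Context, gormDB *gorm.DB) ([]User, error) {
        db := otgorm.WithContext(ctx, gormDB) // argument order is an assumption
        var users []User
        if err := db.Find(&users).Error; err != nil {
            return nil, err
        }
        return users, nil
    }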
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/lib/pq/example/listen.
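Returning to the RETURNING clause described above, a hedged sketch (db is an open *sql.DB; the table and column names are illustrative):

    var id int
    // Use QueryRow with RETURNING instead of LastInsertId.
    err := db.QueryRow(
        "INSERT INTO users(name, email) VALUES($1, $2) RETURNING id",
        "alice", "alice@example.com",
    ).Scan(&id)
    if err != nil {
        log.Fatal(err)
    }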
Package lq implements a spatial database which stores objects each of which is associated with a 2D point (a location in a 2D space). The points serve as the "search key" for the associated object. It is intended to efficiently answer "circle inclusion" queries, also known as "range queries": basically questions like: Which objects are within a radius R of the location L? In this context, "efficiently" means significantly faster than the naive, brute force O(n) testing of all known points. Additionally it is assumed that the objects move along unpredictable paths, so that extensive preprocessing (for example, constructing a Delaunay triangulation of the point set) may not be practical. The implementation is a "bin lattice": a 2D rectangular array of brick-shaped (rectangular) regions of space. Each region is represented by a pointer to a (possibly empty) doubly-linked list of objects. All of these sub-bricks are the same size. All bricks are aligned with the global coordinate axes. Terminology used here: the region of space associated with a bin is called a sub-brick. The collection of all sub-bricks is called the super-brick. The super-brick should be specified to surround the region of space in which (almost) all the key-points will exist. If key-points move outside the super-brick everything will continue to work, but without the speed advantage provided by the spatial subdivision. For more details about how to specify the super-brick's position, size and subdivisions see NewDB below. Overview of usage: an application using this facility to perform locality queries over objects of type myStruct would first create a database with: Then, call Attach for each object to attach to the database. Attach returns a 'proxy' object, which is a link between the user object and its representation in the locality database. When a client object moves, the application calls Update with the new location. Update is a method of the lq.Proxy object, which is why the proxy object is generally kept within the user object, though it can be managed separately: To perform a query, DB.ForEachWithinRadius is passed a user function which will be called for all client objects in the locality. See Func below for more detail. The DB.FindNearestInRadius function can be used to find a single nearest neighbor using the database. Note that "locality query" is also known as neighborhood query, neighborhood search, near neighbor search, and range query. Author: Aurélien Rainone Based on original work of: Craig Reynolds
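A hedged sketch of that flow (every signature below, NewDB's bounds and subdivision parameters, Attach, Update, and the radius-query callback, is an assumption drawn from the overview above, not the package's actual API):

    type boid struct{ name string }

    func example() {
        // Super-brick covering x in [0,100), y in [0,100), split into 10x10 sub-bricks.
        db := lq.NewDB(0, 0, 100, 100, 10, 10) // parameter meaning assumed

        b := &boid{name: "b1"}
        proxy := db.Attach(b, 12, 34) // attach the object at its current location

        // The object moved; keep its proxy in sync.
        proxy.Update(15, 30)

        // Visit every attached object within radius 5 of (10, 30).
        db.ForEachWithinRadius(10, 30, 5, func(obj interface{}, sqDist float64) {
            fmt.Println("nearby:", obj.(*boid).name, sqDist)
        })
    }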
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include <, <=, >, >=, ==, and array-contains. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
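To make the flow above concrete, here is a small sketch using this client; the project ID, collection and field names, and the State struct are illustrative placeholders.

	package main

	import (
		"context"
		"fmt"
		"log"

		"cloud.google.com/go/firestore"
		"google.golang.org/api/iterator"
	)

	// State is an illustrative document shape; the "firestore:" tags rename fields.
	type State struct {
		Capital    string  `firestore:"capital"`
		Population float64 `firestore:"pop"` // in millions
	}

	func main() {
		ctx := context.Background()

		// Create a client with a placeholder project ID.
		client, err := firestore.NewClient(ctx, "my-project-id")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// Refer to a document; creating the ref involves no network traffic.
		ca := client.Collection("States").Doc("California")

		// Set replaces the document or creates it if it is missing.
		if _, err := ca.Set(ctx, State{Capital: "Sacramento", Population: 39.1}); err != nil {
			log.Fatal(err)
		}

		// Read it back and extract the data into a struct.
		snap, err := ca.Get(ctx)
		if err != nil {
			log.Fatal(err)
		}
		var s State
		if err := snap.DataTo(&s); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+v\n", s)

		// Query the collection and iterate over matching documents.
		iter := client.Collection("States").Where("pop", ">", 10.0).Documents(ctx)
		defer iter.Stop()
		for {
			doc, err := iter.Next()
			if err == iterator.Done {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(doc.Ref.ID)
		}
	}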
Package migration automatically handles versioning of a database schema by applying a series of migrations supplied by the client. It uses features only from the database/sql package, so it tries to be driver independent. However, to track the version of the database, it is necessary to execute some SQL. I've made an effort to keep those queries simple, but if they don't work with your database, you may override them. This package works by applying a series of migrations to a database. Once a migration is created, it should never be changed. Every time a database is opened with this package, all necessary migrations are executed in a single transaction. If any part of the process fails, an error is returned and the transaction is rolled back so that the database is left untouched. (Note that for this to be useful, you'll need to use a database that supports rolling back changes to your schema. Notably, MySQL does not support this, although SQLite and PostgreSQL do.) The version of a database is defined as the number of migrations applied to it.
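To illustrate the general idea only (this is not this package's actual API), here is a sketch in plain database/sql of applying pending migrations inside one transaction and recording the schema version; the schema_version table, its layout, and the PostgreSQL-style placeholder are assumptions.

	package migrate

	import (
		"database/sql"
		"fmt"
	)

	// Apply runs migrations[version:] inside a single transaction and records the
	// new version, so a failure leaves the schema untouched. The schema_version
	// table is assumed to already exist with exactly one row.
	func Apply(db *sql.DB, migrations []string) error {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		defer tx.Rollback() // no-op once Commit succeeds

		var version int
		if err := tx.QueryRow(`SELECT version FROM schema_version`).Scan(&version); err != nil {
			return err
		}
		for i := version; i < len(migrations); i++ {
			if _, err := tx.Exec(migrations[i]); err != nil {
				return fmt.Errorf("migration %d failed: %w", i+1, err)
			}
		}
		if _, err := tx.Exec(`UPDATE schema_version SET version = $1`, len(migrations)); err != nil {
			return err
		}
		return tx.Commit()
	}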
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address; the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address then the driver may connect multiple times to the same host, and will not mark the node as down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token-aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, write the partition key predicate with bind markers (WHERE pk1 = ? AND pk2 = ?) rather than embedding literal values in the query string, as shown in the sketch below. The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
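A minimal sketch of the flow above, assuming a reachable cluster, an example_keyspace keyspace, and a mytable table with partition key columns pk1 and pk2; all addresses and names are placeholders.

	package main

	import (
		"log"

		"github.com/gocql/gocql"
	)

	func main() {
		// Initial contact points; addresses, keyspace, and credentials are placeholders.
		cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
		cluster.Keyspace = "example_keyspace"
		cluster.Consistency = gocql.Quorum
		cluster.Authenticator = gocql.PasswordAuthenticator{Username: "user", Password: "password"}

		// Route to the local DC first, and prefer replicas for the partition key.
		cluster.PoolConfig.HostSelectionPolicy =
			gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

		session, err := cluster.CreateSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Bind markers on all partition key columns keep the query token-aware.
		var value string
		if err := session.Query(`SELECT value FROM mytable WHERE pk1 = ? AND pk2 = ?`,
			"abc", "xyz").Scan(&value); err != nil {
			log.Fatal(err)
		}
		log.Println(value)
	}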
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs), while the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using PageSize values that are too low will negatively affect performance; a value below 100 is probably too low.
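Here is a brief sketch of reading rows with a scanner and of fetching one page manually; the table and column names are placeholders and session is assumed to be an open *gocql.Session.

	package example

	import (
		"fmt"

		"github.com/gocql/gocql"
	)

	// readAll reads rows with Iter.Scanner; declaring value inside the loop avoids
	// accidentally sharing the driver's reused backing memory between iterations.
	func readAll(session *gocql.Session) error {
		scanner := session.Query(`SELECT id, value FROM mytable`).Iter().Scanner()
		for scanner.Next() {
			var (
				id    int
				value string
			)
			if err := scanner.Scan(&id, &value); err != nil {
				return err
			}
			fmt.Println(id, value)
		}
		return scanner.Err()
	}

	// readPage fetches a single page manually; pass the returned page state to the
	// next call to continue, and stop when the returned page state is empty.
	func readPage(session *gocql.Session, pageState []byte) (next []byte, err error) {
		iter := session.Query(`SELECT id, value FROM mytable`).
			PageSize(100).
			PageState(pageState).
			Iter()
		var (
			id    int
			value string
		)
		for iter.Scan(&id, &value) {
			fmt.Println(id, value)
		}
		next = iter.PageState()
		return next, iter.Close()
	}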
While Cassandra currently returns exactly PageSize items in a page (except for the last page), the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on a page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. If the query is an LWT and the configured RetryPolicy additionally implements the LWTRetryPolicy interface, then the policy will be cast to LWTRetryPolicy and used this way. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing.
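As a short sketch of the batch API described above, with one query additionally marked idempotent; the table, columns, and values are placeholders and session is an open *gocql.Session.

	package example

	import (
		"context"

		"github.com/gocql/gocql"
	)

	// writeBatch sends two single-partition inserts as one unlogged batch.
	func writeBatch(ctx context.Context, session *gocql.Session) error {
		b := session.NewBatch(gocql.UnloggedBatch).WithContext(ctx)
		b.Query(`INSERT INTO events (pk, ts, payload) VALUES (?, ?, ?)`, "user-1", 1, "a")
		b.Query(`INSERT INTO events (pk, ts, payload) VALUES (?, ?, ?)`, "user-1", 2, "b")
		return session.ExecuteBatch(b)
	}

	// readOne marks a SELECT as idempotent, making it eligible for retries and
	// speculative execution under the configured policies.
	func readOne(ctx context.Context, session *gocql.Session) (string, error) {
		var payload string
		err := session.Query(`SELECT payload FROM events WHERE pk = ? AND ts = ?`, "user-1", 1).
			WithContext(ctx).
			Idempotent(true).
			Scan(&payload)
		return payload, err
	}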
The two parallel executions of the query race to return a result; the first result received is returned. UDTs can be marshaled and unmarshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
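Returning to the UDT mapping described above, here is a minimal struct with cql tags for a hypothetical coordinates UDT; the table and column names are placeholders.

	package example

	import "github.com/gocql/gocql"

	// Coordinates maps a hypothetical CQL UDT with fields "lat" and "lon";
	// the cql tags bind the Go fields to the CQL field names.
	type Coordinates struct {
		Lat float64 `cql:"lat"`
		Lon float64 `cql:"lon"`
	}

	// loadLocation reads a UDT column into the struct; table and columns are placeholders.
	func loadLocation(session *gocql.Session, id string) (Coordinates, error) {
		var c Coordinates
		err := session.Query(`SELECT location FROM places WHERE id = ?`, id).Scan(&c)
		return c, err
	}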
Package opaque implements OPAQUE, an asymmetric password-authenticated key exchange protocol that is secure against pre-computation attacks. It enables a client to authenticate to a server without ever revealing its password to the server. Protocol details can be found on the IETF RFC page (https://datatracker.ietf.org/doc/draft-irtf-cfrg-opaque) and on the GitHub specification repository (https://github.com/cfrg/draft-irtf-cfrg-opaque). Example_Configuration shows how to instantiate a configuration, from which clients and servers are initialized. Configurations MUST remain the same for a given client between sessions, or the client won't be able to execute the protocol. Configurations can be serialized and deserialized if you need to save, hardcode, or transmit them. Example_Deserialization demonstrates a couple of ways to deserialize OPAQUE protocol messages. Message interpretation depends on the configuration context it's exchanged in. Hence, we need the corresponding configuration. We can then directly deserialize messages from a Configuration or pass them to Client or Server instances, which can do it as well. You must know in advance what message you are expecting, and call the appropriate deserialization function. Example_FakeResponse shows how to counter some client enumeration attacks by faking an existing client entry. Precompute the fake client record, and return it when no valid record is found. Use this with the server's LoginInit function whenever a client wants to retrieve an envelope but a client entry does not exist. Failing to do so results in an attacker being able to enumerate users. Example_LoginKeyExchange demonstrates in a single function the interactions between a client and a server for the login phase. This is of course only a proof-of-concept demonstration, as in practice the client and server execute separately. Example_Registration demonstrates in a single function the interactions between a client and a server for the registration phase. This is of course only a proof-of-concept demonstration, as in practice the client and server execute separately. The server outputs a ClientRecord and the credential identifier. The latter is a unique identifier for a given client (e.g. a database entry ID), which must absolutely stay the same for the whole lifetime of the client and never be reused. Example_ServerSetup shows how to set up the long-term values for the OPAQUE server.
- The secret OPRF seed can be unique for each client or the same for all, but must be the same for a given client between registration and all login sessions.
- The AKE key pair can also be the same for all clients or unique, but must be the same for a given client between registration and all login sessions.
Package cache provides a Hord database driver for a variety of caching strategies. To use this driver, import it as follows: Use the Dial() function to create a new client for interacting with the cache. Hord provides a Setup() function for preparing a database. This function is safe to execute after every Dial(). Hord provides a simple abstraction for working with the cache, with easy-to-use methods such as Get() and Set() to read and write values.
Package nats provides a Hord database driver for the NATS key-value store. The NATS driver allows interacting with the NATS key-value store, which is a distributed key-value store built on top of the NATS messaging system. To use this driver, import it as follows: Use the Dial() function to create a new client for interacting with the NATS driver. Hord provides a Setup() function for preparing the database. This function is safe to execute after every Dial(). Hord provides a simple abstraction for working with the NATS driver, with easy-to-use methods such as Get() and Set() to read and write values. Here are some examples demonstrating common usage patterns for the NATS driver.
Package cassandra provides a Hord database driver for Cassandra. Cassandra is a highly scalable, distributed database designed to handle large amounts of data across many commodity servers. To use this driver, import it as follows: Use the Dial() function to create a new client for interacting with Cassandra. Hord provides a Setup() function for preparing a database. This function is safe to execute after every Dial(). Hord provides a simple abstraction for working with Cassandra, with easy-to-use methods such as Get() and Set() to read and write values.
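A short sketch of that flow; the import path and the Config fields shown (Hosts, Keyspace) are assumptions for illustration, while Dial(), Setup(), Get(), and Set() follow the usage described above.

	package main

	import (
		"fmt"
		"log"

		"github.com/madflojo/hord/drivers/cassandra" // import path assumed
	)

	func main() {
		// Dial a Cassandra-backed Hord client; the Config fields are assumptions.
		db, err := cassandra.Dial(cassandra.Config{
			Hosts:    []string{"127.0.0.1"},
			Keyspace: "hord",
		})
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Setup prepares the keyspace/table; safe to run after every Dial().
		if err := db.Setup(); err != nil {
			log.Fatal(err)
		}

		// Simple key-value round trip.
		if err := db.Set("greeting", []byte("hello")); err != nil {
			log.Fatal(err)
		}
		v, err := db.Get("greeting")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(v))
	}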
Package lookaside provides a Hord database driver for a look-aside cache. To use this driver, import it as follows: Use the Dial() function to create a new client for interacting with the cache. Hord provides a Setup() function for preparing a database. This function is safe to execute after every Dial(). Hord provides a simple abstraction for working with the cache, with easy-to-use methods such as Get() and Set() to read and write values.
Package bbolt provides a Hord database driver for BoltDB. BoltDB is an embedded key-value database that persists data on disk. To use this driver, import it as follows: Use the Dial() function to create a new client for interacting with BoltDB. Hord provides a Setup() function for preparing a database. This function is safe to execute after every Dial(). Hord provides a simple abstraction for working with BoltDB, with easy-to-use methods such as Get() and Set() to read and write values.
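A similar sketch for BoltDB; the import path and the Config fields shown (Filename, Bucketname) are assumptions for illustration.

	package main

	import (
		"fmt"
		"log"

		"github.com/madflojo/hord/drivers/bbolt" // import path assumed
	)

	func main() {
		// The Config fields shown here are assumptions for illustration.
		db, err := bbolt.Dial(bbolt.Config{
			Filename:   "/tmp/hord.db",
			Bucketname: "default",
		})
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Prepare the on-disk bucket; safe to run after every Dial().
		if err := db.Setup(); err != nil {
			log.Fatal(err)
		}

		// Values persist across restarts because BoltDB stores data on disk.
		if err := db.Set("counter", []byte("1")); err != nil {
			log.Fatal(err)
		}
		v, err := db.Get("counter")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(v))
	}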
Package hashmap provides a Hord database driver for an in-memory hashmap. The Hashmap driver is a simple, in-memory key-value store that stores data in a hashmap structure. To use this driver, import it as follows: Use the Dial() function to create a new client for interacting with the hashmap driver. Hord provides a Setup() function for preparing a database. This function is safe to execute after every Dial(). Hord provides a simple abstraction for working with the hashmap driver, with easy-to-use methods such as Get() and Set() to read and write values.
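And a sketch for the in-memory hashmap driver; the import path and the empty Config are assumptions for illustration.

	package main

	import (
		"fmt"
		"log"

		"github.com/madflojo/hord/drivers/hashmap" // import path assumed
	)

	func main() {
		// The in-memory driver needs no connection details; an empty Config is assumed here.
		db, err := hashmap.Dial(hashmap.Config{})
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		if err := db.Setup(); err != nil {
			log.Fatal(err)
		}

		// Data lives only in process memory and is lost when the program exits.
		if err := db.Set("session:123", []byte("active")); err != nil {
			log.Fatal(err)
		}
		v, err := db.Get("session:123")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(v))
	}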