Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: This package will carefully use 'package unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag i.e. 'go install -tags=codec.safe ...'. This library works with both the standard `gc` and the `gccgo` compilers. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. There is also seamless support provided for registering an extension (with a tag) but letting the encoding mechanism default to the standard way. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chances of issues happening because the encoding and decoding sides are out of sync e.g. decoded via very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'codec.safe' to run tests or build in safe mode, e.g. 'go test -tags=codec.safe ...'. To run benchmarks, please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
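The sample usage model above had its listing elided; a minimal sketch of that model, assuming the github.com/ugorji/go/codec import path (any Handle — MsgpackHandle, CborHandle, BincHandle — works the same way as the JsonHandle shown):

    import "github.com/ugorji/go/codec"

    type User struct {
        Name string
        Age  int
    }

    func roundTrip() error {
        var (
            b []byte
            h codec.Handle = new(codec.JsonHandle) // a Handle is reusable and safe for concurrent read
        )
        // Encode a value into b using the configured Handle.
        if err := codec.NewEncoderBytes(&b, h).Encode(User{Name: "jane", Age: 30}); err != nil {
            return err
        }
        // Decode back into a value; Decode takes a pointer.
        var u User
        return codec.NewDecoderBytes(b, h).Decode(&u)
    }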
Package radix implements all functionality needed to work with redis and all things related to it, including redis cluster, pubsub, sentinel, scanning, lua scripting, and more.

For a single node redis instance use NewPool to create a connection pool. The connection pool is thread-safe and will automatically create, reuse, and recreate connections as needed: If you're using sentinel or cluster you should use NewSentinel or NewCluster (respectively) to create your client instead.

Any redis command can be performed by passing a Cmd into a Client's Do method. Each Cmd should only be used once. The return from the Cmd can be captured into any appropriate Go primitive type, or a slice, map, or struct, if the command returns an array. FlatCmd can also be used if you wish to use non-string arguments like integers, slices, maps, or structs, and have them automatically be flattened into a single string slice.

Cmd and FlatCmd can unmarshal results into a struct. The results must be a key/value array, such as that returned by HGETALL. Exported field names will be used as keys, unless the fields have the "redis" tag: Embedded structs will inline that struct's fields into the parent's: The same rules for field naming apply when a struct is passed into FlatCmd as an argument.

Cmd and FlatCmd both implement the Action interface. Other Actions include Pipeline, WithConn, and EvalScript.Cmd. Any of these may be passed into any Client's Do method.

There are two ways to perform transactions in redis. The first is with the MULTI/EXEC commands, which can be done using the WithConn Action (see its example). The second is using EVAL with lua scripting, which can be done using the EvalScript Action (again, see its example). EVAL with lua scripting is recommended in almost all cases. It only requires a single round-trip, it's infinitely more flexible than MULTI/EXEC, it's simpler to code, and for complex transactions, which would otherwise need a WATCH statement with MULTI/EXEC, it's significantly faster.

All the client creation functions (e.g. NewPool) take in either a ConnFunc or a ClientFunc via their options. These can be used in order to set up timeouts on connections, perform authentication commands, or even implement custom pools.

All interfaces in this package were designed such that they could have custom implementations. There is no dependency within radix that demands any interface be implemented by a particular underlying type, so feel free to create your own Pools or Conns or Actions or whatever makes your life easier.

Errors returned from redis can be explicitly checked for using the resp2.Error type. Note that the errors.As function, introduced in go 1.13, should be used. Use the golang.org/x/xerrors package if you're using an older version of go.

Implicit pipelining is an optimization implemented and enabled in the default Pool implementation (and therefore also used by Cluster and Sentinel) which involves delaying concurrent Cmds and FlatCmds a small amount of time and sending them to redis in a single batch, similar to manually using a Pipeline. By doing this radix significantly reduces the I/O and CPU overhead for concurrent requests. Note that only commands which do not block are eligible for implicit pipelining. See the documentation on Pool for more information about the current implementation of implicit pipelining and for how to configure or disable the feature. For performance comparisons between Clients with and without implicit pipelining see the benchmark results in the README.md.
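As a sketch of the pool-and-Cmd usage described above (assuming the github.com/mediocregopher/radix/v3 import path and a redis instance on localhost):

    import "github.com/mediocregopher/radix/v3"

    func example() error {
        // A thread-safe pool of 10 connections to a single redis node.
        pool, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)
        if err != nil {
            return err
        }
        defer pool.Close()

        // Each command is an Action built with Cmd; the first argument
        // receives the reply (pass nil to discard it).
        if err := pool.Do(radix.Cmd(nil, "SET", "greeting", "hello")); err != nil {
            return err
        }
        var greeting string
        return pool.Do(radix.Cmd(&greeting, "GET", "greeting"))
    }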
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters:

The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats):

where all parameters must be escaped, or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh":

The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported:

account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string.

region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter.

database: Specifies the database to use by default in the client session (can be changed after login).

schema: Specifies the database schema to use by default in the client session (can be changed after login).

warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login).

role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login).

passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.

passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password.

loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up once the timeout elapses.

authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below).

application: Identifies your application to Snowflake Support.

insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check.
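The Open() example elided above might look like the following sketch, reusing the account, user and database names from this section (the user:password@account/database/schema?param=value layout follows the driver's documented DSN format):

    import (
        "database/sql"

        _ "github.com/snowflakedb/gosnowflake" // registers the "snowflake" driver
    )

    func open() (*sql.DB, error) {
        dsn := "jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh"
        return sql.Open("snowflake", dsn)
    }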
IMPORTANT: Change the default value for testing or emergency situations only.

token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.

client_session_keep_alive: Set to true to have a heartbeat run in the background every hour to keep the connection alive, so that the connection session never expires. Use this option with care, as it keeps the session open for as long as the process is alive.

ocspFailOpen: true by default. Set to false to make the OCSP check operate in fail-closed mode.

validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection.

tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic.

All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding:

A complete connection string looks similar to the following:

Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config.

The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example:

By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. To enable debug logging for the driver, call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method.

A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example:

As of version 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example.

The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute:

The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive.
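The statement elided after "To use JSON format, execute:" is an ALTER SESSION command; a sketch (the db handle is assumed to come from sql.Open above):

    import "database/sql"

    func useJSONFormat(db *sql.DB) error {
        // Switch the session's result format from the default to JSON.
        _, err := db.Exec("ALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON'")
        return err
    }

    // Session parameters such as TIMESTAMP_OUTPUT_FORMAT can also ride
    // along in the DSN itself, with the value URL-escaped, e.g.:
    //   .../mydb/testschema?TIMESTAMP_OUTPUT_FORMAT=YYYY-MM-DD%20HH24%3AMI%3ASS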
This parameter can be set only at the session level.

Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well.

For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below.

When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are:

1. The SQL data type.

2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}.

3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly.

4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.)

5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

    =====================================================================================================
                         |                   ARROW                      |                JSON
    =====================================================================================================
    SQL Data Type        | Default Go Data Type     | Supported Go Data | Default Go Data Type   | Supported Go Data
                         | for Scan() interface{}   | Types for Scan()  | for Scan() interface{} | Types for Scan()
    =====================================================================================================
    BOOLEAN              | bool                                         | string                 | bool
    -----------------------------------------------------------------------------------------------------
    VARCHAR              | string                                       | string
    -----------------------------------------------------------------------------------------------------
    DOUBLE               | float32, float64 [1], [2]                    | string                 | float32, float64
    -----------------------------------------------------------------------------------------------------
    INTEGER that         | int, int8, int16, int32, int64               | string                 | int, int8, int16,
    fits in int64        | [1], [2]                                     |                        | int32, int64
    -----------------------------------------------------------------------------------------------------
    INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int     | string                 | error
    fit in int64         | [1], [2], [3], [4]                           |
    -----------------------------------------------------------------------------------------------------
    NUMBER(P, S)         | float32, float64, *big.Float                 | string                 | float32, float64
    where S > 0          | [1], [2], [3], [5]                           |
    -----------------------------------------------------------------------------------------------------
    DATE                 | time.Time                                    | string                 | time.Time
    -----------------------------------------------------------------------------------------------------
    TIME                 | time.Time                                    | string                 | time.Time
    -----------------------------------------------------------------------------------------------------
    TIMESTAMP_LTZ        | time.Time                                    | string                 | time.Time
    -----------------------------------------------------------------------------------------------------
    TIMESTAMP_NTZ        | time.Time                                    | string                 | time.Time
    -----------------------------------------------------------------------------------------------------
    TIMESTAMP_TZ         | time.Time                                    | string                 | time.Time
    -----------------------------------------------------------------------------------------------------
    BINARY               | []byte                                       | string                 | []byte
    -----------------------------------------------------------------------------------------------------
    ARRAY                | string                                       | string
    -----------------------------------------------------------------------------------------------------
    OBJECT               | string                                       | string
    -----------------------------------------------------------------------------------------------------
    VARIANT              | string                                       | string

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.

[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below.

[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Float32()/.String()/.Float64() methods. For an example, see below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value:

Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones".

Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which all map to the single Go time.Time type:

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use the slice type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example: For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example:

Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion.
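The large-value retrieval example referred to above was elided; the following sketch shows the interface{}-scan pattern under the stated rules, with sf aliasing gosnowflake (the table and column names are assumptions):

    import (
        "context"
        "database/sql"
        "fmt"
        "math/big"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func readLargeInt(db *sql.DB) (*big.Int, error) {
        // *big.Int is only returned when the query runs with a
        // higher-precision context (see note [3] above).
        ctx := sf.WithHigherPrecision(context.Background())
        rows, err := db.QueryContext(ctx, "SELECT big_num FROM t1")
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        if !rows.Next() {
            return nil, rows.Err()
        }
        var v interface{}
        if err := rows.Scan(&v); err != nil {
            return nil, err
        }
        n, ok := v.(*big.Int)
        if !ok {
            // values that fit in int64 may arrive as a smaller type
            return nil, fmt.Errorf("unexpected type %T", v)
        }
        return n, nil // n.Int64() is valid if the value fits in an int64
    }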
The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error:

For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows:

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.

Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package:

The driver directly downloads a result set from the cloud storage if the size is large, shifting workload from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this does not help reduce the memory footprint by itself. Consider the Custom JSON Decoder.

Custom JSON Decoder for Parsing Result Set (Experimental)

The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint while running four times slower. Be cautious when using the option.

The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or using a Config structure specifying:

The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice in base64 URL format is through the base64.URLEncoding.EncodeToString() function.
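The rewritten example mentioned above ("The above example could be rewritten as follows") was elided; a sketch of the binding-flag pattern, with sf aliasing gosnowflake (the table name is an assumption):

    import (
        "database/sql"
        "time"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func insertTimes(db *sql.DB, t time.Time) error {
        // Each flag applies to the binding parameter that follows it,
        // telling the driver which Snowflake type a time.Time binds to.
        _, err := db.Exec(
            "INSERT INTO tz_test(ntz, ltz) VALUES(?, ?)",
            sf.DataTypeTimestampNtz, t,
            sf.DataTypeTimestampLtz, t,
        )
        return err
    }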
On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard-encoded PKI public key string. One way to encode a byte slice in base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate the valid key pair, you can execute the following commands in the shell:

Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust.

JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`.

The driver supports authenticating via an external browser. When a connection is created, the driver will open the browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified:

The external browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or by using a Config structure with the following ExternalBrowserTimeout specified:

This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility of SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example:

When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets:

The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext():

Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors.
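The Context construction and the result-set loop referenced above were elided; a sketch combining both, with sf aliasing gosnowflake (the statements themselves are placeholders):

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func runMulti(db *sql.DB) error {
        // Declare that exactly 2 statements follow; a mismatched count
        // (e.g. an injected extra statement) causes an error.
        ctx, err := sf.WithMultiStatement(context.Background(), 2)
        if err != nil {
            return err
        }
        rows, err := db.QueryContext(ctx, "SELECT 1; SELECT 2;")
        if err != nil {
            return err
        }
        defer rows.Close()
        for {
            var n int
            for rows.Next() {
                if err := rows.Scan(&n); err != nil {
                    return err
                }
            }
            if !rows.NextResultSet() { // advance to the next statement's results
                break
            }
        }
        return rows.Err()
    }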
For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATEs had not functioned as expected.

The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements.

Note: PUT statements are not supported for multi-statement queries.

If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors in the usual way. For example:

Preparing statements and using bind variables are also not supported for multi-statement queries.

The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode.

The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)
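The asynchronous snippet referenced above ("In the code below...") was elided; a sketch with sf aliasing gosnowflake (the SYSTEM$WAIT query is just a long-running placeholder):

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func asyncQuery(db *sql.DB) error {
        // Flag the context so QueryContext returns without waiting.
        ctx := sf.WithAsyncMode(context.Background())
        rows, err := db.QueryContext(ctx, "SELECT SYSTEM$WAIT(10)")
        if err != nil {
            return err
        }
        defer rows.Close()

        // ... perform other work here while the query runs ...

        var s string
        for rows.Next() { // blocks until the result set is available
            if err := rows.Scan(&s); err != nil {
                return err
            }
        }
        return rows.Err()
    }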
Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go.

The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters:

## Using PUT

The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.)

Note: PUT statements are not supported for multi-statement queries.

## Using GET

The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example:

## Specifying temporary directory for encryption and compression

Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example:
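The PUT invocation referenced above was elided; a sketch of passing the command through db.Query (the stage and file names are placeholders; absolute paths are recommended, and on Windows backslashes in the path need escaping or replacing):

    import "database/sql"

    func uploadFile(db *sql.DB) error {
        // Upload a local file to the user stage; the driver performs the
        // transfer and returns a result row describing it.
        rows, err := db.Query("PUT file:///tmp/data/orders.csv @~/staged OVERWRITE = TRUE")
        if err != nil {
            return err
        }
        return rows.Close()
    }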
Package dom provides GopherJS bindings for the JavaScript DOM APIs.

This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it.

One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies.

The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/).

The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document.

The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example:

Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about.

This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`.

Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices.

This package has a relatively stable API. However, there will be backwards incompatible changes from time to time.
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM.
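The type-assertion example mentioned above ("Example:") was elided; a sketch, assuming the honnef.co/go/js/dom import path and an element with ID "submit":

    import "honnef.co/go/js/dom"

    func wire() {
        d := dom.GetWindow().Document()
        // Generic lookups return the Element interface; assert to the
        // concrete type to reach element-specific fields and methods.
        el := d.GetElementByID("submit")
        btn := el.(*dom.HTMLButtonElement)
        btn.AddEventListener("click", false, func(e dom.Event) {
            e.PreventDefault()
        })
    }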
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and serializing queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write.

esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple:

esquery currently supports version 7 of the ElasticSearch Go client.

The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this:

    { "query": { "term": { "user": "Kimchy" } } }

The library will always generate this:

This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `esquery` will generate an array even if there's only one query object.
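"Getting started is extremely simple:" originally introduced a snippet; here is a sketch instead, assuming the github.com/aquasecurity/esquery import path and an index named "users":

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery"
        "github.com/elastic/go-elasticsearch/v7"
    )

    func search() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }
        // Build a term query and run it against the "users" index; the
        // response is the official client's *esapi.Response, as-is.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Size(10).
            Run(es,
                es.Search.WithContext(context.TODO()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
    }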
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1, and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so they are readable by this package. To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler.

Read a pickled string or unicode object

Read a pickled integer

Read a pickled list of numbers into a structure

Read a pickled dictionary into a structure

Pickle a struct

You can pickle recursive objects like so: Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above Python code into a structure with UnpackInto would either fail or recurse forever.

The Python Pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function: The module and name are the class name. So if you have a class called "Foo" in the module "bar", the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling:

The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything. If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise convert the args into whatever you want and return that as the value from the function with nil for the error. To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following:

If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse.

The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version with the impact.

Protocol 0

This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package.

Protocol 1

This opcode is used in recreating pickled python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason.

Protocol 2

This opcode is used in recreating pickled python objects. That is currently not supported by this package.
This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. These opcodes allow using a registry of popular objects that are pickled by name, typically classes. It is envisioned that through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
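The read snippets referenced earlier ("Read a pickled string or unicode object", "Read a pickled dictionary into a structure") were elided; a sketch of the reading side (each Unpickle call consumes one complete pickle from the reader, so this assumes the stream contains two pickles):

    import (
        "io"

        "github.com/hydrogen18/stalecucumber"
    )

    func read(r io.Reader) error {
        // Read a pickled string or unicode object.
        s, err := stalecucumber.String(stalecucumber.Unpickle(r))
        if err != nil {
            return err
        }
        _ = s

        // Read a pickled dictionary into a structure.
        var conf struct {
            Host string
            Port int64
        }
        return stalecucumber.UnpackInto(&conf).From(stalecucumber.Unpickle(r))
    }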
Package dom provides Go bindings for the JavaScript DOM APIs.

This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it.

One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies.

The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/).

The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document.

The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example:

Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about.

This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behavior, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`.

Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices.

This package has a relatively stable API. However, there will be backwards incompatible changes from time to time.
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM.
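As a small sketch of the TokenList behavior described above (Set and SetString come from the text; GetElementByID, Class and Add are the usual DOM bindings and are assumptions here):

    import "honnef.co/go/js/dom"

    func restyle() {
        el := dom.GetWindow().Document().GetElementByID("box")
        classes := el.Class()            // a *dom.TokenList
        classes.Add("highlight")         // manipulate a single token
        classes.SetString("card shadow") // replace the whole attribute value
    }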
Package couchdb provides components to work with CouchDB 2.x with Go.

Resource provides the low-level wrapper functions around HTTP methods used for communicating with the CouchDB server.

Server contains all the functions to work with a CouchDB server, including some basic functions to facilitate the basic user management provided by it.

Database contains all the functions to work with a CouchDB database, such as manipulating and querying documents.

ViewResults represents the results produced by design document views. When any of its functions like Offset(), TotalRows(), UpdateSeq() or Rows() is called, it will perform a query on the views on the server side and return the results as a slice of Row.

ViewDefinition is a definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database.

Document represents a document object in the database. All structs that can be mapped to CouchDB documents must have it embedded. For example: Then you can call Store(db, &user) to store it into CouchDB or Load(db, user.GetID(), &anotherUser) to get the data from the database.

ViewField represents a view definition value bound to Document.
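The User example referenced above ("For example:") was elided; a sketch (the import path and the *couchdb.Database handle are assumptions; Store, Load and GetID come from the text):

    import couchdb "github.com/leesper/couchdb-golang" // import path is an assumption

    type User struct {
        Name             string `json:"name"`
        couchdb.Document        // embedding makes the struct storable as a document
    }

    func demo(db *couchdb.Database) error {
        user := User{Name: "alice"}
        if err := couchdb.Store(db, &user); err != nil {
            return err
        }
        var another User
        return couchdb.Load(db, user.GetID(), &another)
    }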
Package pargo provides functions and data structures for expressing parallel algorithms. While Go is primarily designed for concurrent programming, it is also usable to some extent for parallel programming, and this library provides convenience functionality to turn otherwise sequential algorithms into parallel algorithms, with the goal of improving performance. For documentation that provides a more structured overview than is possible with Godoc, see the wiki at https://github.com/exascience/pargo/wiki

Pargo provides the following subpackages: pargo/parallel provides simple functions for executing series of thunks or predicates, as well as thunks, predicates, or reducers over ranges in parallel. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism pargo/speculative provides speculative implementations of most of the functions from pargo/parallel. These implementations not only execute in parallel, but also attempt to terminate early as soon as the final result is known. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism pargo/sequential provides sequential implementations of all functions from pargo/parallel, for testing and debugging purposes. pargo/sort provides parallel sorting algorithms. pargo/sync provides an efficient parallel map implementation. pargo/pipeline provides functions and data structures to construct and execute parallel pipelines.

Pargo has been influenced to various extents by ideas from Cilk, Threading Building Blocks, and Java's java.util.concurrent and java.util.stream packages. See http://supertech.csail.mit.edu/papers/steal.pdf for some theoretical background, and the sample chapter at https://mitpress.mit.edu/books/introduction-algorithms for a more practical overview of the underlying concepts.
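As a sketch of the range-parallel style provided by pargo/parallel (assuming a parallel.Range(low, high, n, f) signature where n selects the number of subranges, with 0 meaning "let the library choose"):

    import "github.com/exascience/pargo/parallel"

    func double(data []int) {
        // Split [0, len(data)) into subranges and process them in parallel.
        parallel.Range(0, len(data), 0, func(low, high int) {
            for i := low; i < high; i++ {
                data[i] *= 2
            }
        })
    }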
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions, e.g. if the current Go release is go 1.9, we support unsafe use only from go 1.7+. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree. This symmetry is important to reduce the chances of issues happening because the encoding and decoding sides are out of sync e.g. decoded via very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model: To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. 'go test -tags=safe ...'. For benchmarks, please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding. Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
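Since the RPC facet above had its snippet elided, here is a sketch of plugging a codec Handle into net/rpc (the address and the choice of MsgpackHandle are illustrative):

    import (
        "net"
        "net/rpc"

        "github.com/ugorji/go/codec"
    )

    func dialRPC(addr string) (*rpc.Client, error) {
        conn, err := net.Dial("tcp", addr)
        if err != nil {
            return nil, err
        }
        h := new(codec.MsgpackHandle)
        // GoRpc mirrors net/rpc's wire protocol over any supported format;
        // MsgpackSpecRpc implements the msgpack-RPC spec instead.
        rpcCodec := codec.GoRpc.ClientCodec(conn, h)
        return rpc.NewClientWithCodec(rpcCodec), nil
    }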
Package sessions provides tools to manage cookie-based web sessions. Special emphasis is placed on security by implementing OWASP recommendations, specifically the following features: In addition, the package provides the following functionality: While simple to use, the package offers a number of extensively documented configuration variables. It also does not assume specific backend technologies. That is, any session storage system may be used simply by implementing the PersistenceLayer interface (or parts of it). This package is currently not written to be run on multiple machines in a distributed fashion without a load balancer that implements sticky sessions. This may change in the future. Although some more configuration needs to happen for production readiness, the package's defaults allow you to get started very quickly. To get access to the current session, simply call Start(): By providing "true" instead of "false" to the Start() function, you can force the creation of a session, even if there previously was none. Once you have a session, you can identify a user across multiple HTTP requests. You may add values to the session, attach a user to it, cause its session ID to change, or destroy it again. For more extensive user-centered functions (for example, signing up, logging in and out, changing passwords etc.), see the subdirectory "users". Before putting your application into production, you must implement the NewSessionCookie function: You may choose a different expiry date, domain, and path, but the other fields are mandatory (given that you are using TLS, which you certainly should). You can change the name of the cookie by changing the SessionCookie variable. The default is the inconspicuous string "id". The following timeout values may be adjusted according to the requirements of your application: To further reduce the risk of session hijacking attacks, this package checks client IP addresses as well as user agent strings and destroys sessions if changes in these properties were detected. Refer to the AcceptRemoteIP and AcceptChangingUserAgent variables for more information. Sessions are stored in a local RAM cache (which is a simple map) whose size is defined by the MaxSessionCacheSize variable. If you set this variable to 0, no sessions are held locally. The SessionCacheExpiry controls when a session will be purged from the cache based on the last time it was used. The cache is write-through (except for session last access times). That is, every time a change was made to a session, that change is forwarded to the package's persistence layer to be saved. The persistence layer is a collection of functions which allow the storage and retrieval of objects from a permanent data store. For example, you may use an SQL database or a key-value store. See the documentation of PersistenceLayer for details on the functions to be implemented. If you need to implement only some of the functions, you may use ExtendablePersistenceLayer instead of creating your own type. The package default is to do nothing. That is, sessions are not persisted and therefore will get lost when purged from the local cache or when the application exits. Session objects implement gob.GobEncoder/gob.GobDecoder and json.Marshaler/json.Unmarshaler. While encoding to JSON allows you to easily inspect session attributes in your database, GOB serialization is preferred as it will restore session objects precisely.
(For example, the JSON package always unmarshals numbers into floats even if they were originally integers.) It is recommended that you purge expired sessions from your data store from time to time, e.g. by using a cron job, because users may abandon your website, which will leave old sessions in your store. It is also recommended to call PurgeSessions() before exiting the program. This will cause session last access times to be updated. This package provides a number of utility functions which may be useful in the context of session and user management. The CUID() function generates Base-62 "compact unique identifiers" suitable for user IDs. The RandomID() function generates random Base-62 strings of any length. The ReasonablePassword() function checks the strength of a password based on the recommendations of NIST SP 800-63B.
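To make the flow above concrete, here is a minimal sketch. Only Start(), NewSessionCookie, and the "force creation" flag are documented above; the import path, the Set method, and the specific cookie fields used are assumptions for illustration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/rivo/sessions" // import path assumed
)

func init() {
	// Mandatory before production: issue cookies suitable for TLS.
	// Expiry, domain, and path may vary; Secure and HttpOnly should not.
	sessions.NewSessionCookie = func() *http.Cookie {
		return &http.Cookie{
			Expires:  time.Now().Add(30 * 24 * time.Hour),
			HttpOnly: true,
			Secure:   true,
		}
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	session, err := sessions.Start(w, r, false) // pass true to force creation
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	if session == nil {
		fmt.Fprintln(w, "no session yet")
		return
	}
	session.Set("visited", true) // attach a value; setter name assumed
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```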
Package dotmatrix encodes images in a "dot matrix" pattern using braille unicode characters. Images are first converted to monochrome, then each 2x4 pixel block is coded to an 8-dot braille character. In this fashion, an image's entire pixel set can be mapped, one-by-one, to either a "filled" or "unfilled" braille dot. The resulting braille symbols are arranged as lines of text to form a representation of the original image. The encoded image is lossy in the sense that color information is reduced to monochrome with diffusion, but otherwise the resulting text-based image is pixel perfect. In other words, if a pure monochrome image is used as input, there will be no loss of information. https://en.wikipedia.org/wiki/Braille_Patterns
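The 2x4-block-to-braille mapping can be sketched from first principles; this illustrates the encoding idea described above, not the package's internal code:

```go
// brailleFor maps a 2x4 block of "on/off" pixels to a braille rune.
// Unicode braille dot bits: dots 1-3 are the left column (rows 0-2),
// dots 4-6 the right column (rows 0-2), dots 7-8 the bottom row.
func brailleFor(on func(x, y int) bool) rune {
	var bits rune
	offsets := [4][2]rune{
		{0x01, 0x08}, // row 0: dot 1, dot 4
		{0x02, 0x10}, // row 1: dot 2, dot 5
		{0x04, 0x20}, // row 2: dot 3, dot 6
		{0x40, 0x80}, // row 3: dot 7, dot 8
	}
	for y := 0; y < 4; y++ {
		for x := 0; x < 2; x++ {
			if on(x, y) {
				bits |= offsets[y][x]
			}
		}
	}
	return 0x2800 + bits // U+2800 is the blank braille pattern
}
```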
Package pargo provides functions and data structures for expressing parallel algorithms. While Go is primarily designed for concurrent programming, it is also usable to some extent for parallel programming, and this library provides convenience functionality to turn otherwise sequential algorithms into parallel algorithms, with the goal to improve performance. For documentation that provides a more structured overview than is possible with Godoc, see the wiki at https://github.com/exascience/pargo/wiki Pargo provides the following subpackages: pargo/parallel provides simple functions for executing series of thunks or predicates, as well as thunks, predicates, or reducers over ranges in parallel. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism pargo/speculative provides speculative implementations of most of the functions from pargo/parallel. These implementations not only execute in parallel, but also attempt to terminate early as soon as the final result is known. See also https://github.com/ExaScience/pargo/wiki/TaskParallelism pargo/sequential provides sequential implementations of all functions from pargo/parallel, for testing and debugging purposes. pargo/sort provides parallel sorting algorithms. pargo/sync provides an efficient parallel map implementation. pargo/pipeline provides functions and data structures to construct and execute parallel pipelines. Pargo has been influenced to various extents by ideas from Cilk, Threading Building Blocks, and Java's java.util.concurrent and java.util.stream packages. See http://supertech.csail.mit.edu/papers/steal.pdf for some theoretical background, and the sample chapter at https://mitpress.mit.edu/books/introduction-algorithms for a more practical overview of the underlying concepts.
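As a concrete illustration of the sequential-to-parallel transformation these subpackages automate, here is a hand-rolled range partitioner written with plain goroutines. It is deliberately not pargo's API, just the underlying idea:

```go
package main

import (
	"runtime"
	"sync"
)

// parallelRange splits [0, n) into one contiguous chunk per CPU and
// runs f(low, high) on each chunk concurrently. pargo/parallel offers
// range helpers of this kind (plus reducers and speculative variants).
func parallelRange(n int, f func(low, high int)) {
	p := runtime.GOMAXPROCS(0)
	chunk := (n + p - 1) / p
	var wg sync.WaitGroup
	for low := 0; low < n; low += chunk {
		high := low + chunk
		if high > n {
			high = n
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			f(lo, hi)
		}(low, high)
	}
	wg.Wait()
}

func main() {
	data := make([]float64, 1_000_000)
	parallelRange(len(data), func(low, high int) {
		for i := low; i < high; i++ {
			data[i] = float64(i) * float64(i)
		}
	})
}
```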
Package amt implements the Hash Array Mapped Trie (HAMT) in Go (1.18+ generics). See "Ideal Hash Trees" (Phil Bagwell, 2001) for an overview of the implementation, advantages, and disadvantages of HAMTs. The AMT implementation has a natural cardinality of 16 for the root trie and all sub-tries; each AMT level is indexed by 4 hash bits. The depth of a map or set will be on the order of log16(N). This package uses unsafe pointers/pointer-arithmetic extensively, so it is inherently unsafe and not guaranteed to work today or tomorrow. Unsafe pointers enable a compact memory layout with fewer allocations, and effectively reduce the depth of a map or set by reducing the number of pointers dereferenced along the path to a key or value. No attention is paid to 32-bit architectures since it's now the year 2000, but compatibility may still be there. An alternative approach, using an interface type to represent either a key-value pair or entry slice (sub-trie), has a few drawbacks. Interface values are the size of 2 pointers (versus 1 when using unsafe pointers), which would increase the memory overhead for key-value/sub-trie entries by 50% (e.g. 24 bytes versus 16 bytes on 64-bit architectures). If the interface value is assigned a slice of entries (sub-trie), a new allocation (the size of 3 pointers) is required for the slice-header before it can be wrapped into the interface value. Accessing an entry slice (sub-trie) through an interface value requires (1) dereferencing the interface's data pointer to get to the slice-header (among other things), then (2) dereferencing the slice-header's data pointer to access an entry in the slice. Unsafe pointers eliminate the extra allocation and overhead of (1), allowing entries to point directly to either a key-value struct or an array of entries. Generics enable a type-safe implementation, where the key-value type of a map or set is fixed after instantiation. The memory layouts of Go interfaces and slices are detailed in the following articles:
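The 4-bits-per-level indexing can be sketched as follows (the low-bits-first layout is an assumption for illustration; the package's actual bit order may differ):

```go
// levelIndex extracts the 4 hash bits that select a slot at the given
// trie level, yielding values in [0, 16). With 64-bit hashes this
// allows 16 levels, giving the log16(N) depth mentioned above.
func levelIndex(hash uint64, level uint) uint64 {
	return (hash >> (4 * level)) & 0xF
}
```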
Package triemap allows creating a map keyed by `[]rune` or `[]byte` without casting with `string(v)`. Due to the lack of generics, values are stored and retrieved as `interface{}` and require type asserting like `m.Get(k).(Foo)`. Compared to casting a rune or byte slice to a string, putting data into the map is actually faster with the standard map (`m[string(k)] = v`); getting data out of the map, however, is much faster with `RuneSliceMap`. `ByteSliceMap` is always slower than casting to string and using a standard map, but it reduces allocations, so it's a decent alternative on memory-constrained systems. `RuneSliceMap` performs much faster and produces fewer allocations than casting to string and using standard maps.
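A usage sketch under stated assumptions: zero-value construction and the Set method are hypothetical; only Get and its interface{} result are documented above:

```go
func example() {
	var m triemap.RuneSliceMap // zero-value construction assumed

	key := []rune("answer")
	m.Set(key, 42) // hypothetical setter; only Get is documented above

	if v, ok := m.Get(key).(int); ok { // values come back as interface{}
		fmt.Println(v) // 42
	}
}
```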
Package form implements primitives that reduce form boilerplate by allowing the caller to specify their fields exactly once. All values are processed via a chain of transformations that map text into a structured value, and vice versa. Each transformation is encapsulated in a `form.Value` implementation, for instance a `value.Int` will transform text into a Go integer and signal any errors that occur during that transformation. Forms are initialized once with all the fields via a call to `form.Load`. Each field binds an input to a value. By convention, value objects depend on pointer variables; this means you can simply point into a predefined "model" struct. Once the form is submitted, the model will contain the validated values, ready to use. However, this is only a convention; a value object can arbitrarily handle its internal state. The following is an example of one way to use the form:
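A sketch of one way this could look. Only form.Load and value.Int are named above; the form.NewField constructor and the Submit call are hypothetical stand-ins:

```go
type model struct {
	Age int // the validated value lands here after submission
}

func example(r *http.Request) {
	var m model

	// value.Int(&m.Age) maps the "age" input's text to a Go int;
	// form.NewField is a hypothetical binding helper.
	f := form.Load(
		form.NewField("age", value.Int(&m.Age)),
	)

	// Hypothetical submission call; afterwards m.Age is ready to use.
	if err := f.Submit(r); err != nil {
		log.Println(err)
	}
}
```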
Package goterator provides map and reduce functionality for iterators.
Package par provides utilities for parallelizing computations. Most implementations are built on parallelization via partitioning, i.e. data is divided into partitions, the partitions are mapped to intermediate representations in parallel, then the intermediate representations are combined (reduced) in parallel (where possible) to produce the desired result. This approach to parallelization provides a few key benefits: As with every performance-oriented tool, measure before applying. Most of the provided functionality is only beneficial if the datasets are large enough or the computations are expensive.
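A plain-Go sketch of the partition/map/reduce shape described above (this illustrates the approach, not the par package's API):

```go
package main

import (
	"fmt"
	"sync"
)

// sumParallel partitions xs, maps each partition to a partial sum in
// parallel, then reduces the partial sums into the final result.
func sumParallel(xs []int, parts int) int {
	partials := make([]int, parts) // one intermediate result per partition
	chunk := (len(xs) + parts - 1) / parts
	var wg sync.WaitGroup
	for p := 0; p < parts; p++ {
		lo, hi := p*chunk, (p+1)*chunk
		if lo > len(xs) {
			lo = len(xs)
		}
		if hi > len(xs) {
			hi = len(xs)
		}
		wg.Add(1)
		go func(p, lo, hi int) { // map: reduce each partition independently
			defer wg.Done()
			for _, x := range xs[lo:hi] {
				partials[p] += x
			}
		}(p, lo, hi)
	}
	wg.Wait()
	total := 0 // reduce: combine the intermediate representations
	for _, s := range partials {
		total += s
	}
	return total
}

func main() {
	fmt.Println(sumParallel([]int{1, 2, 3, 4, 5, 6, 7, 8}, 4)) // 36
}
```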
Package store provides a disk-backed data structure for use in storing []byte values referenced by 128 bit keys, with options for replication. It can handle billions of keys (as memory allows) and full concurrent access across many cores. All location information about each key is stored in memory for speed, but values are stored on disk, with the exception of recently written data being buffered first and batched to disk later. This has been written with SSDs in mind, but spinning drives should work also, though storing toc files (Table Of Contents, key location information) on a separate disk from values files is recommended in that case. Each key is two 64bit values, known as keyA and keyB, both uint64 values. These are usually created by a hashing function of the key name, but that duty is left outside this package. Each modification is recorded with an int64 timestamp that is the number of microseconds since the Unix epoch (see github.com/gholt/brimtime.TimeToUnixMicro). With a write and a delete for the exact same timestamp, the delete wins. This allows a delete to be issued for a specific write without fear of deleting any newer write. Internally, each modification is stored with a uint64 timestamp that is equivalent to (brimtime.TimeToUnixMicro(time.Now())<<8), with the lowest 8 bits used to indicate deletions and other bookkeeping items. This means that the allowable time range is 1970-01-01 00:00:00 +0000 UTC (+1 microsecond because all zeroes indicates a missing item) to 4253-05-31 22:20:37.927935 +0000 UTC. There are constants TIMESTAMPMICRO_MIN and TIMESTAMPMICRO_MAX available for bounding usage. There are background tasks for:

* TombstoneDiscard: This will discard older tombstones (deletion markers). Tombstones are kept for Config.TombstoneAge seconds and are used to ensure a replicated older value doesn't resurrect a deleted value. But keeping all tombstones for all time is a waste of resources, so they are discarded over time. Config.TombstoneAge controls how long they should be kept and should be set to an amount greater than several replication passes.

* PullReplication: This will continually send out pull replication requests for all the partitions the ValueStore is responsible for, as determined by the Config.MsgRing. The other responsible parties will respond to these requests with data they have that was missing from the pull replication request. Bloom filters are used to reduce bandwidth, which has the downside that a very small percentage of items may be missed each pass. A moving salt is used with each bloom filter so that after a few passes there is an exceptionally high probability that all items will be accounted for.

* PushReplication: This will continually send out any data for any partitions the ValueStore is *not* responsible for, as determined by the Config.MsgRing. The responsible parties will respond to these requests with acknowledgements of the data they received, allowing the requester to discard the out of place data.

* Compaction: TODO description.

* Audit: This will verify the data on disk has not been corrupted. It will slowly read data over time and validate checksums. If it finds issues, it will try to remove affected entries from the in-memory location map so that replication from other stores will send the information they have and the values will get re-stored locally.
In cases where the affected entries cannot be determined, it will make a callback requesting the store be shut down and restarted; this restart will result in the affected keys being missing and therefore replicated in by other stores. Note that if the disk gets filled past a configurable threshold, any external writes other than deletes will result in an error. Internal writes such as compaction and removing successfully push-replicated data will continue. There is also a modified form of ValueStore called GroupStore that expands the primary key to two 128 bit keys and offers a Lookup method which retrieves all matching items for the first key.
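A small sketch of the internal timestamp layout described earlier in this passage. The exact flag positions within the low 8 bits are bookkeeping details not documented here, so the deletion bit below is illustrative:

```go
// internalTimestamp packs microseconds-since-epoch into the upper 56
// bits, leaving the low 8 bits for deletion markers and other
// bookkeeping, mirroring (brimtime.TimeToUnixMicro(time.Now()) << 8).
const deletionBit = 0x01 // illustrative flag position

func internalTimestamp(micro int64, deleted bool) uint64 {
	ts := uint64(micro) << 8
	if deleted {
		ts |= deletionBit
	}
	return ts
}

// timestampMicro recovers the external int64 microsecond timestamp.
func timestampMicro(ts uint64) int64 { return int64(ts >> 8) }
```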
Package schemax provides methods, types and bidirectional marshaling functionality intended for use in the area of X.501/LDAP directory schema abstraction. Types provided by this package are created based on the precepts of RFC2252 and RFC4512, as are the associated parsing techniques (e.g.: 'qdescrs', etc.). A variety of methods and functions exist to make handling this data simpler. LDAP directories contain and serve data in a hierarchical manner. The syntax and evaluation of this data is governed by a schema, which itself is hierarchical in nature. The nature of this package's operation is highly referential. Objects are referenced via pointers, and these objects can inhabit multiple other multi-valued types. For example, *AttributeType instances are stored within type-specific slices called Collections. Each *AttributeType that exists can be referenced by other *AttributeType instances (in a scenario where "super typing" is in effect), or by other *ObjectClass instances via their own PermittedAttributeTypes (May) and RequiredAttributeTypes (Must) list types. Literal "copies" of single objects are never made. References are always through pointers. This package is primarily intended for any architect, developer or analyst looking to do one or more of the following:

• Parse (Marshal) textual LDAP schema definitions into objects

• Unmarshal objects into textual LDAP schema definitions

• Work with data models and structures that may represent or contain some facet of a schema

This package aims to provide a fast, reliable, standards-compliant parsing routine for LDAP schema definitions into discrete, useful objects. Parsing of raw values during the Marshal operation is conducted without the use of the regexp package, nor any facility based on regular expressions. Instead, precise byte-for-byte parsing/truncation of raw definitions is conducted, during which time known flags (or labels) are identified for specialized handling (e.g.: NAME). For successful parsing, each definition must be limited to a single line when an entire schema file is streamed line-by-line. Future releases of this package will relax this requirement. However, when marshaling single definitions, multi-line definitions are supported. Line feeds are removed outright, and recurring WHSP characters (spaces/tabs) are reduced to single spaces between fields. When POPULATING various collection types (e.g.: slices of definitions), the following "order of operations" MUST be honored:

1. LDAPSyntaxes
2. MatchingRules
3. AttributeTypes
4. MatchingRuleUses
5. ObjectClasses
6. DITContentRules
7. NameForms
8. DITStructureRules

Individually, collections containing the above elements should also be populated in order of referential superiority. For example, all independent instances should be populated before those that depend upon them, as in the cases of sub-types, sub-classes and sub-rules. An obvious real-world example of this is the 'name' attribute (per RFC4519), which is a super-type of several other key attribute types. In such a case, those definitions that depend (or are based) upon the 'name' attribute WILL NOT MARSHAL until 'name' has been marshaled itself. This logic applies no matter how the definitions are being received (or "read"), and applies whether or not the all-encompassing Subschema type is used. In short: MIND YOUR ORDERING.
Iterating a collection type, such as AttributeTypeCollection instances, MUST be done using the Len() collection method, as shown in the sketch following this passage. Iteration will always return collection members in FIFO order (First-In/First-Out). Within subdirectories of this package are popular Go implementations of standard Attribute Types, Object Classes, Matching Rules and LDAP Syntaxes from RFCs that are recognized (almost) universally. This includes (but is not limited to) content from RFC4512, RFC4519 and RFC2307. Included in each subdirectory is an unmodified text copy of the Internet-Draft from which the relevant definitions originate. The user is advised that some LDAP implementations have certain attribute types and object classes "built-in", not sourced from a schema file like others (rather, they are likely compiled into the product). This varies between implementations and, as such, inconsistencies may arise for someone using this product across various directory products. One size absolutely does not fit all. In such a case, an attempt to marshal a schema file may fail due to unsatisfied super-type or super-class dependencies. To mitigate this, the user must somehow provide the lacking definitions, either by themselves or using one of the subdirectory packages. Also known as OID "aliases", macros allow a succinct expression of an OID prefix by way of a text identifier. As a real-world example, RFC2307 uses the alias "nisSchema" to describe the OID prefix of "1.3.6.1.1.1". This package supports the registration and use of such aliases within the Macros map type. Note that this is an all-or-nothing mechanism. Understand that if a non-nil Macros instance is detected, and unregistered aliases are encountered during a parsing run, normal operations will be impacted. As such, users are advised to anticipate any aliases needed in advance or to abolish their use altogether. OID aliasing supports both dot (.) and colon (:) runes for delimitation, thus 'nisSchema.1.1' and 'nisSchema:1.1' are acceptable. By default, definitions are unmarshaled as single-line string values. This package supports the creation of user-authored closure functions to allow customization of this behavior. As a practical example, users may opt to write a function to convert definitions to individual CSV rows, or perhaps JSON objects -- literally anything, so long as it is a string. Each definition type comes with a default (optional) UnmarshalFunc instance. This, or any custom function, can be invoked by name as follows: In the above example, the definition would unmarshal as a standard RFC4512 schema definition, but with linebreaks and indenting added. This must be done for EVERY definition being unmarshaled. User-authored functions MUST honor the following function signature (see the DefinitionUnmarshaler type for details). The structure of a given user-authored unmarshaler function will vary, but should generally reflect the example below. For added convenience, each definition includes a Map() method that returns a map[string][]string instance containing the effective contents of said definition. This is useful in situations where the user is more interested in simple access to string values in fields, as opposed to the complicated traversal of pointer instances that may exist within a definition. Naturally, ordering of fields is lost due to use of a map in this fashion. It is up to the consuming application to ensure correct ordering of fields as described in RFC4512 section 4.1, wherever applicable.
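The iteration sketch referenced above. Len() is the documented entry point, while the Index accessor and the element's printed rendering are assumptions:

```go
// Iterate an AttributeTypeCollection in FIFO order using Len().
func printAll(attrs schemax.AttributeTypeCollection) {
	for i := 0; i < attrs.Len(); i++ {
		at := attrs.Index(i) // per-element accessor name assumed
		fmt.Println(at)
	}
}
```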
Each definition supports the optional assignment of a []byte value containing "extended information", which can literally be anything (pre-rendered HTML, or even Graphviz content to name a few potential use cases). This package imposes no restrictions of any kind regarding the nature or length of the assigned byte slice. The main purpose of this functionality is to allow the user to annotate information that goes well beyond the terse DESC field value normally present within definitions.
Package emacs contains infrastructure to write dynamic modules for Emacs in Go. See https://www.gnu.org/software/emacs/manual/html_node/elisp/Dynamic-Modules.html and https://www.gnu.org/software/emacs/manual/html_node/elisp/Writing-Dynamic-Modules.html for background on Emacs modules. To build an Emacs module, you have to build your Go code as a shared C library, e.g., using go build ‑buildmode=c‑shared. If you import the emacs package, the shared library is loadable as an Emacs module. This package contains high-level as well as lower-level functions. The high-level functions help reduce boilerplate when exporting functions to Emacs and calling Emacs functions from Go. The lower-level functions are more type-safe, support more exotic use cases, and have less overhead. At the highest level, use the Export function to export Go functions to Emacs, and the Import function to import Emacs functions so that they can be called from Go. These functions automatically convert between Go and Emacs types as necessary. This export functionality is unrelated to exported Go names or the Cgo export functionality. Functions exported to Emacs don’t have to be exported in the Go or Cgo sense. The automatic type conversion behaves as follows. Go bool values become the Emacs symbols nil and t. When converting to Go bool, only nil becomes false; any other value becomes true. This matches the Emacs convention that all non-nil values represent a logically true value. Go integral values become Emacs integer values and vice versa. Go floating-point values become Emacs floating-point values and vice versa. Go strings become Emacs strings and vice versa. Go []byte arrays and slices become Emacs unibyte strings. Emacs unibyte strings become Go []byte slices. Other Go arrays and slices become Emacs vectors. Emacs vectors become Go slices. Go maps become Emacs hash tables and vice versa. All types that implement In can be converted to Emacs. All types that implement Out can be converted from Emacs. You can implement In or Out yourself to extend the type conversion machinery. A reflect.Value behaves like its underlying value. Functions exported via Export don’t have a documentation string by default. To add one, pass a Doc value to Export. Since argument names aren’t available at runtime, the documentation by default lacks argument names. Use Usage to add argument names. As an alternative to Import, you can call functions directly using [Invoke]. [Invoke] uses the same autoconversion rules as Import, but allows you to specify an arbitrary function value. At a slightly lower level, you can use [Call] and [CallOut] to call Emacs functions. These functions use the In and Out interfaces to convert from and to Emacs values. The primary disadvantage of this approach is that you can’t use primitive types like int or string directly. Use wrapper types like Int and String instead. On the other hand, [Call] and [CallOut] are more type-safe than [Invoke]. If you use [Call] or [CallOut], the compiler will detect unsupported types. By contrast, when using Export, Import, or [Invoke], they will only be detected at runtime and cause runtime panics or errors. To reduce boilerplate when using [Call] and [CallOut], this package contains several convenience types that implement In or Out. Most primitive types have corresponding wrapper types, such as Int, Float, or String. Types such as List, Cons, or Hash allow you to pass common Lisp structures without much boilerplate.
There are also some destructuring types such as ListOut or Uncons. At an even lower level, you can use ExportFunc, ImportFunc, and [Funcall] as alternatives to Export, Import, and [Call], respectively. They have the same behavior, but don’t do any type conversion at all. The fundamental types for interacting with Emacs are Env and Value. They represent Emacs module environments and values as described in https://www.gnu.org/software/emacs/manual/html_node/elisp/Module-Functions.html. These types are opaque, and their zero values are invalid. You can’t use Env and Value values once they are no longer live. This is described in https://www.gnu.org/software/emacs/manual/html_node/elisp/Module-Functions.html and https://www.gnu.org/software/emacs/manual/html_node/elisp/Module-Values.html. As a best practice, don’t let these values escape exported functions. You also can’t interact with Emacs from other threads, cf. https://www.gnu.org/software/emacs/manual/html_node/elisp/Module-Functions.html. These rules are a bit subtle, but you are usually on the safe side if you don’t store Env and Value values in struct fields or global variables, and don’t pass them to other goroutines. All functions in this package translate between Go errors and Emacs nonlocal exits. See https://www.gnu.org/software/emacs/manual/html_node/elisp/Module-Nonlocal.html. This package represents Emacs nonlocal exits as ordinary Go errors. Each call to a function fetches and clears nonlocal exit information after the actual call and converts it to an error of type Signal or Throw. This means that the Go bindings don’t exhibit the saturating error behavior described at https://www.gnu.org/software/emacs/manual/html_node/elisp/Module-Nonlocal.html. Instead, they behave like normal Go functions: an erroneous return doesn’t affect future function calls. When returning from an exported function, this package converts errors back to Emacs nonlocal exits. If you return a Signal or Error, Emacs will raise a signal using the signal function. If you return a Throw, Emacs will throw to a catch using the throw function. If you return any other type of error, Emacs will signal an error of type go‑error, with the error string as signal data. You can define your own error symbols using DefineError. There are also a couple of factory functions for builtin errors such as WrongTypeArgument and OverflowError. You can use Var to define a dynamic variable. This package intentionally doesn’t support wrapping pointers to arbitrary Go values in Emacs user pointer objects. Attempting to do that wouldn’t work well with Go’s garbage collection and CGo’s pointer-passing rules; see https://pkg.go.dev/cmd/cgo#hdr-Passing_pointers. Instead, prefer using handles, e.g. simple integers as map keys. See the “Handles” example. A long-running operation should periodically call [ProcessInput] to process pending input and to check whether the user wants to quit the operation. If so, you should cancel the operation as soon as possible. See the documentation of [ProcessInput] for a concrete example. As an alternative, this package provides limited support for asynchronous operations. Such operations are represented using the AsyncHandle type. You can use the Async type to create and manage asynchronous operations. Async requires a way to notify Emacs about a pending asynchronous result; this package supports notification using pipes or sockets. If you want to run code while Emacs is loading the module, use OnInit to register initialization functions. 
Loading the module will call all initialization functions in order. You can use ERTTest to define ERT tests backed by Go functions. This works similarly to Export, but defines ERT tests instead of functions.
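A brief sketch of exporting a function with documentation. Export, Doc, and Usage are described above; the Name option and the exact option-passing style are assumptions to be checked against the package reference:

```go
package main

import "github.com/phst/emacs" // import path assumed

// Add becomes callable from Emacs Lisp; int64 arguments and results
// are converted automatically per the rules described above.
func Add(a, b int64) int64 { return a + b }

func init() {
	emacs.Export(Add,
		emacs.Name("my-add"), // assumed option for the Lisp-side name
		emacs.Doc("Return the sum of A and B."),
		emacs.Usage("A B"),
	)
}

// main never runs inside Emacs; it exists so the package can be built
// with go build -buildmode=c-shared.
func main() {}
```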
Package hashring implements a consistent hashing ring data structure. In general, consistent hashing is all about mapping objects from a very big set of values (e.g. request ids) to objects from a quite small set (e.g. server addresses). The word "consistent" means that it can produce consistent mappings on different machines or processes without additional state exchange and communication. For more theory about the subject please see this great document: https://theory.stanford.edu/~tim/s16/l/l1.pdf There are two goals for this hashring implementation: 1) To be efficient in highly concurrent applications by blocking read operations for the least possible time. 2) To correctly handle very rare but still possible hash collisions, which may otherwise break your eventually consistent application. To reach the first goal, hashring uses an immutable AVL tree internally, making read operations (getting the item for an object) block only for the tiny amount of time needed to swap the ring's tree root after a write operation (insertion or deletion). The second goal is reached by using a ring of 2^64-1 points, which dramatically reduces the probability of hash collisions (the greater the number of items on the ring, the higher the probability of collisions), together with an implementation that handles collisions.
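The read path described above can be illustrated with a copy-on-write snapshot plus an atomic root swap. This simplified sketch uses a sorted slice where the package uses an immutable AVL tree, and ignores collision handling:

```go
package ringsketch

import (
	"sort"
	"sync"
	"sync/atomic"
)

// ringState is an immutable snapshot; readers load it atomically and
// never block on writer locks.
type ringState struct {
	points []uint64 // sorted point hashes; stands in for the AVL tree
}

type Ring struct {
	mu    sync.Mutex // serializes writers only
	state atomic.Pointer[ringState]
}

// Get returns the first point clockwise from hash, wrapping around.
func (r *Ring) Get(hash uint64) (uint64, bool) {
	s := r.state.Load() // lock-free read of the current snapshot
	if s == nil || len(s.points) == 0 {
		return 0, false
	}
	i := sort.Search(len(s.points), func(i int) bool { return s.points[i] >= hash })
	if i == len(s.points) {
		i = 0 // wrap around the ring
	}
	return s.points[i], true
}

// Insert copies the current snapshot, adds the point, and publishes
// the new root in one atomic store; in-flight readers keep the old one.
func (r *Ring) Insert(point uint64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	old := r.state.Load()
	var pts []uint64
	if old != nil {
		pts = append(pts, old.points...)
	}
	i := sort.Search(len(pts), func(i int) bool { return pts[i] >= point })
	pts = append(pts, 0)
	copy(pts[i+1:], pts[i:])
	pts[i] = point
	r.state.Store(&ringState{points: pts})
}
```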
Pipeline is a functional programming package for the Go language. With Pipeline, developers can use functional principles such as map, reduce or filter on their collection types. Pipeline is written in Go and inspired by underscore.js, lodash.js and Martin Fowler's pipelines: http://martinfowler.com/articles/collection-pipeline/ author mparaiso <mparaiso@online.fr> copyrights 2014 license GPL-3.0 version 0.1

## Installing

- Install the Go language
- Use 'go get' with a command line interface

## Examples

### Counting words

```go
```

### Calculating the total cost of a customer order

```go
```

## Implemented pipelines

- Chunk
- Compact
- Concat
- Difference
- Equals
- Every
- Filter
- First
- Flatten
- GroupBy
- Head
- IndexOf
- Intersection
- Last
- LastIndexOf
- Map
- Push
- Reduce
- ReduceRight
- Reverse
- Slice
- Some
- Sort
- Splice
- Tail
- ToMap
- Union
- Unique
- Unshift
- Without
- Xor
- Zip
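For orientation, here is the counting-words task in plain Go, i.e. the sequential logic that chained Map/Reduce pipelines of the kind listed above abstract over (ordinary Go, not this package's API):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	text := "the quick brown fox jumps over the lazy dog the end"

	// Map each word to lower case, then reduce the words into counts;
	// a collection pipeline expresses these steps as chained calls.
	counts := map[string]int{}
	for _, w := range strings.Fields(text) {
		counts[strings.ToLower(w)]++
	}
	fmt.Println(counts["the"]) // 3
}
```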
Package rel implements relational algebra, a set of operations on sets of tuples which result in relations, as defined by E. F. Codd. What follows is a brief introduction to relational algebra. For a more complete introduction, please read C. J. Date's book "Database in Depth". This package uses the same terminology. Relations are sets of named tuples with identical attributes. The primitive operations which define the relational algebra are:

Union, which adds two sets together.

Diff, which removes all elements from one set which exist in another.

Restrict, which removes values from a relation that do not satisfy a predicate.

Project, which removes zero or more attributes from the tuples the relation is defined on.

Rename, which changes the names of the attributes in a relation.

Join, which can multiply two relations together (which may have different types of tuples) by returning all combinations of tuples in the two relations where all attributes in one relation are equal to the attributes in the other where the names are the same. This is sometimes called a natural join.

This package represents tuples as structs with no unexported or anonymous fields. The fields of the struct are the attributes of the tuple it represents. Attributes are strings with some additional methods that are useful for constructing predicates and candidate keys. They have to be valid field names in Go. Predicates are functions which take a tuple and return a boolean, and are used as an input for Restrict expressions. Candidate keys are the sets of attributes which define unique tuples in a relation. Every relation has at least one candidate key, because every relation only contains unique tuples. Some relations may contain several candidate keys. Relations in this package can be either literal, such as a relation from a map of tuples, or an expression of other relations, such as a join between two source relations. Literal relations can be defined using the rel.New function. Given a slice, map, or channel of tuples, the New function constructs a new "essential" relation, with those values as tuples. Other packages can create essential relations from other sources of data, such as the github.com/jonlawlor/relcsv package, or the github.com/jonlawlor/relsql package. Relational expressions are generated when one of the methods Project, Restrict, Union, Diff, Join, Rename, Map, or GroupBy is called. During their construction, the rel package checks to see if they can be distributed over the source relations that they are being called on, and if so, it attempts to push the expressions down the tree of relations as far as they can go, with the end goal of getting pushed all the way to the "essential" source relations. In this way, relational expressions can (hopefully) reduce the amount of computation done in total and/or done in the Go runtime.
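A construction sketch. Tuples-as-structs and rel.New are documented above; the exact form of the candidate-key argument is an assumption:

```go
package main

import (
	"fmt"

	"github.com/jonlawlor/rel"
)

// Tuples are structs with only exported, non-anonymous fields.
type Supplier struct {
	SNO  int
	Name string
	City string
}

func main() {
	tuples := []Supplier{
		{1, "Smith", "London"},
		{2, "Jones", "Paris"},
	}
	// Construct an essential relation from a slice of tuples; the
	// candidate-key argument shown here is assumed, so consult the
	// package for the exact signature of New.
	suppliers := rel.New(tuples, [][]string{{"SNO"}})
	fmt.Println(suppliers)
}
```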
GKES (Go Kafka Event Source) attempts to fill the gaps in the Go/Kafka library ecosystem. It supplies Exactly Once Semantics (EOS), local state stores and incremental consumer rebalancing to Go Kafka consumers, making it a viable alternative to a traditional Kafka Streams application written in Java. GKES is a Go/Kafka library tailored towards the development of Event Sourcing applications, by providing a high-throughput, low-latency Kafka client framework. Using Kafka transactions, it provides for EOS, data integrity and high availability. If you wish to use GKES as a straight Kafka consumer, it will fit the bill as well, though there are plenty of libraries for that, and researching which best fits your use case is time well spent. GKES is not an all-in-one, do-everything black box. Some elements, in particular the StateStore, have been left without comprehensive implementations. A useful and performant local state store rarely has a flat data structure. If your state store does, there are some convenient implementations provided. However, to achieve optimum performance, you will not only need to write a StateStore implementation, but will also need to understand what the proper data structures are for your use case (trees, heaps, maps, disk-based LSM trees or combinations thereof). You can use the provided github.com/aws/go-kafka-event-source/streams/stores.SimpleStore as a starting point. GKES purposefully does not provide a pre-canned way of exposing StateStore data, other than producing to another Kafka topic. There are as many ways to vend data as there are web applications. Rather than putting effort into inventing yet another one, GKES provides the mechanisms to query StateStores via Interjections. This mechanism can be plugged into whatever request/response mechanism suits your use case (gRPC, RESTful HTTP service...any number of web frameworks already in the Go ecosystem). [TODO: provide a simple http example] For those familiar with the Kafka Streams API, GKES provides for stream `Punctuators`, but we call them `Interjections` (because it sounds cool). Interjections allow you to insert actions into your EventSource at specified intervals per assigned partition via streams.EventSource.ScheduleInterjection, or at any time via streams.EventSource.Interject. This is useful for bookkeeping activities, aggregated metric production or even error handling. Interjections have full access to the StateStore associated with an EventSource and can interact with output topics like any other EventProcessor. One issue that Kafka consumer applications have long suffered from is latency spikes during a consumer rebalance. The cooperative sticky rebalancing introduced by Kafka and implemented by kgo helps resolve this issue. However, once StateStores are thrown into the mix, things get a bit more complicated because initializing the StateStore on a host involves consuming a compacted TopicPartition from start to end. GKES solves this with the IncrementalRebalancer and takes it one step further. The IncrementalRebalancer rebalances consumer partitions in a controlled fashion, minimizing latency spikes and limiting the blast radius of a bad deployment. GKES provides conventions for asynchronously processing events on the same Kafka partition while still maintaining data/stream integrity.
The AsyncBatcher and AsyncJobScheduler allow you to split a TopicPartition into sub-streams by key, ensuring all events for a particular key are processed in order, allowing for parallel processing on a given TopicPartition. For more details, see Async Processing Examples. A Kafka transaction is a powerful tool which allows for Exactly Once Semantics (EOS) by linking a consumer offset commit to one or more records that are being produced by your application (a StateStore record for example). The history of Kafka EOS is a long and complicated one with varied degrees of performance and efficiency. Early iterations required one producer transaction per consumer partition, which was very inefficient, as a topic with 1000 partitions would also require 1000 clients in order to provide EOS. This has since been addressed, but depending on client implementations, there is a high risk of running into "producer fenced" errors as well as reduced throughput. In a traditional Java Kafka Streams application, transactions are committed according to the auto-commit frequency, which defaults to 100ms. This means that your application will only produce readable records every 100ms per partition. The effect of this is that no matter what you do, your tail latency will be at least 100ms and downstream consumers will receive records in bursts rather than a steady stream. For many use cases, this is unacceptable. GKES solves this issue by using a configurable transactional producer pool and a type of "Nagle's algorithm". Uncommitted offsets are added to the transaction pool in sequence. Once a producer has reached its record limit, or enough time has elapsed (10ms by default), the head transaction will wait for any incomplete events to finish, then flush and commit. While this transaction is committing, GKES continues to process events and optimistically begins a new transaction and produces records on the next producer in the pool. Since transactions are produced in sequence, there is no danger of commit offset overlap or duplicate message processing in the case of a failure. To ensure EOS, your EventSource must use either the IncrementalRebalancer, or kgo's cooperative sticky implementation. Though if you're using a StateStore, the IncrementalRebalancer should be used to avoid lengthy periods of inactivity during application deployments. Rather than create yet another Kafka driver, GKES is built on top of kgo. This Kafka client was chosen as it (in our testing) has superior throughput and latency profiles compared to other client libraries currently available to Go developers. One other key advantage is that it provides a migration path to cooperative consumer rebalancing, required for our EOS implementation. Other Go Kafka libraries provide cooperative rebalancing, but do not allow you to migrate from a non-cooperative rebalancing strategy (range, sticky, etc.). This is a major roadblock for existing deployments, as the only migration paths are an entirely new consumer group, or to bring your application completely down and re-deploy with a new rebalance strategy. These migration plans, to put it mildly, are a big challenge for zero-downtime/live applications. The kgo package now makes this migration possible with zero downtime. Kgo also has the proper hooks needed to implement the IncrementalGroupRebalancer, which is necessary for safe deployments when using a local state store. Kudos to kgo!
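A hypothetical sketch of scheduling a periodic interjection. ScheduleInterjection is named above, but the callback signature, the jitter argument, the store accessor, and the return value are all assumptions for illustration:

```go
func scheduleMetrics(eventSource *streams.EventSource[MyStore]) {
	// Every 30 seconds per assigned partition, plus up to 5s of jitter
	// (second duration argument assumed), run a bookkeeping action
	// with full access to the local StateStore.
	eventSource.ScheduleInterjection(func(ec *streams.EventContext[MyStore], when time.Time) streams.ExecutionState {
		publishAggregatedMetrics(ec, when) // user-defined, hypothetical
		return streams.Complete            // constant name assumed
	}, 30*time.Second, 5*time.Second)
}
```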
Package packet provides basic APIs for Linux packet sockets. This package uses memory mapping to increase performance and reduce system calls during packet reads. Most errors returned by this package are of type syscall.Errno, *os.SyscallError, or others documented by this package. Handler.Fd() exists to help users of this package implement functionality that this package does not provide. It should be used with great care.
Package couchdb provides components to work with CouchDB 2.x with Go. Resource is the low-level wrapper around the HTTP methods used for communicating with a CouchDB server. Server contains all the functions to work with a CouchDB server, including some basic functions to facilitate the basic user management provided by it. Database contains all the functions to work with a CouchDB database, such as document manipulation and querying. ViewResults represents the results produced by design document views. When calling any of its functions like Offset(), TotalRows(), UpdateSeq() or Rows(), it will perform a query on the views on the server side, and return the results as a slice of Row. ViewDefinition is a definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database. Document represents a document object in a database. Any struct that can be mapped into a CouchDB document must have it embedded. For example (see the sketch below): Then you can call Store(db, &user) to store it into CouchDB, or Load(db, user.GetID(), &anotherUser) to get the data from the database. ViewField represents a view definition value bound to Document. tools/replicate is a command-line tool for replicating databases from one CouchDB server to another. This is mainly for backup purposes, but you can also use the -continuous option to set up automatic replication.
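The Document-embedding sketch referenced above. The Store and Load calls are quoted from the description; the *couchdb.Database parameter type, error returns, and JSON tags are assumptions:

```go
// User can be stored as a CouchDB document because it embeds Document.
type User struct {
	couchdb.Document
	Name  string `json:"name"` // JSON tags assumed
	Email string `json:"email"`
}

func example(db *couchdb.Database) {
	user := User{Name: "alice", Email: "alice@example.com"}
	if err := couchdb.Store(db, &user); err != nil { // persist to CouchDB
		log.Fatal(err)
	}
	var another User
	if err := couchdb.Load(db, user.GetID(), &another); err != nil { // read back
		log.Fatal(err)
	}
}
```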
Package acceptable is a library that handles headers for content negotiation and conditional requests in web applications written in Go. Content negotiation is specified by RFC (http://tools.ietf.org/html/rfc7231) and, less formally, by Ajax (https://en.wikipedia.org/wiki/XMLHttpRequest).

* contenttype, headername - bundles of useful constants

* data - for holding response data & metadata prior to rendering the response, also allowing lazy evaluation

* header - for parsing and representing certain HTTP headers

* offer - for enumerating offers to be matched against requests

* templates - for rendering Go templates

Server-based content negotiation is essentially simple: the user agent sends a request including some preferences (accept headers), then the server selects one of several possible ways of sending the response. Finding the best match depends on you listing your available response representations. This is all rolled up into a simple-to-use function `acceptable.RenderBestMatch`. What this does is described in detail in [RFC-7231](https://tools.ietf.org/html/rfc7231#section-5.3), but it's easy to use in practice (a sketch appears at the end of this section). The RenderBestMatch function searches for the offer that best matches the request headers. If none match, the response will be 406-Not Acceptable. If you need to have a catch-all case, include offer.Of(p, contenttype.TextAny) or offer.Of(p, contenttype.Any) last in the list. Note that contenttype.TextAny is "text/*" and will typically return "text/plain"; contenttype.Any is "*/*" and will likewise return "application/octet-stream". Each offer will (usually) have a suitable offer.Processor, which is a rendering function. Several are provided (for JSON, XML etc), but you can also provide your own. Also, the templates sub-package provides Go template support. Offers are restricted both by content-type matching and by language matching. The `With` method provides data and specifies its content language. Use it as many times as you need to. The language(s) is matched against the Accept-Language header using the basic prefix algorithm. This means for example that if you specify "en" it will match "en", "en-GB" and everything else beginning with "en-", but if you specify "en-GB", it only matches "en-GB" and "en-GB-*", but won't match "en-US" or even "en". (This implements the basic filtering language matching algorithm defined in https://tools.ietf.org/html/rfc4647.) If your data doesn't need to specify a language, the With method should simply use the "*" wildcard instead. For example, myOffer.With(data, "*") attaches data to myOffer and doesn't restrict the offer to any particular language. The language wildcard could also be used as a catch-all case if it comes after one or more With with a specified language. However, the standard (RFC-7231) advises that a response should be returned even when language matching has failed; RenderBestMatch will do this by picking the first language listed as a fallback, so the catch-all case is only necessary if its data is different to that of the first case. The response data (en and fr above) can be structs, slices, maps, or other values that the rendering processors accept. They will be wrapped as data.Data values, which you can provide explicitly. These allow for lazy evaluation of the content and also support conditional requests. This comes into its own when there are several offers each with their own data model - if these were all to be read from the database before selection of the best match, all but one would be wasted.
Lazy evaluation of the selected data easily overcomes this problem. Besides the data and error returned values, some metadata can optionally be returned. This is the basis for easy support for conditional requests (see [RFC-7232](https://tools.ietf.org/html/rfc7232)). If the metadata is nil, it is simply ignored. However, if it contains a hash of the data (e.g. via MD5) known as the entity tag or etag, then the response will have an ETag header. User agents that recognise this will later repeat the request along with an If-None-Match header. If present, If-None-Match is recognised before rendering starts, and a successful match will avoid the need for any rendering. Due to the lazy content fetching, it can reduce unnecessary database traffic etc. The metadata can also carry the last-modified timestamp of the data, if this is known. When present, this becomes the Last-Modified header and is checked on subsequent requests using the If-Modified-Since header. The template and language parameters are used for templated/web content data; otherwise they are ignored. Sequences of data can also be produced. This is done with data.Sequence(), which takes the same supplier function as used by data.Lazy(). The difference is that, in a sequence, the supplier function will be called repeatedly until its result value is nil. All the values will be streamed in the response (how this is done depends on the rendering processor). Most responses will be UTF-8, sometimes UTF-16. All other character sets (e.g. Windows-1252) are now strongly deprecated. However, legacy support for other character sets is provided. Transcoding is implemented by Match.ApplyHeaders so that the Accept-Charset content negotiation can be implemented. This depends on finding an encoder in golang.org/x/text/encoding/htmlindex (this has an extensive list, however no other encoders are supported). Whenever possible, responses will be UTF-8. Not only is this strongly recommended, it also avoids any transcoding processing overhead. It means for example that "Accept-Charset: iso-8859-1, utf-8" will ignore the iso-8859-1 preference because it can use UTF-8. Conversely, "Accept-Charset: iso-8859-1" will always have to transcode into ISO-8859-1 because there is no UTF-8 option.
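The sketch referenced earlier. offer.Of, With, contenttype.TextAny, and acceptable.RenderBestMatch are named above; the processor constructors and RenderBestMatch's exact parameter list are assumptions:

```go
// Negotiate between JSON (in English or French) and a plain-text
// catch-all; processor.JSON/processor.TXT are assumed names for the
// provided rendering functions.
func handler(w http.ResponseWriter, r *http.Request) {
	en := map[string]string{"greeting": "Hello"}
	fr := map[string]string{"greeting": "Bonjour"}

	best := offer.Of(processor.JSON(), contenttype.ApplicationJSON).
		With(en, "en").With(fr, "fr")
	catchAll := offer.Of(processor.TXT(), contenttype.TextAny).With(en, "*")

	if err := acceptable.RenderBestMatch(w, r, best, catchAll); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
```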
Package elasticclient provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). elasticclient alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `elasticclient` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. elasticclient provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `elasticclient`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: elasticclient currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `elasticclient` will generate an array even if there's only one query object.
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and can significantly reduce the amount of code you need. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be built directly with `esquery` and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple (see the sketch below): esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: { "query": { "term": { "user": { "value": "Kimchy" } } } } This is also true for queries such as "bool", where fields like "must" can receive either one query object or an array of query objects. `esquery` will generate an array even if there's only one query object.
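A minimal getting-started sketch for the term query above, assuming the commonly used import path and the Run signature implied by the summary (a client plus optional esapi search options); verify both against the package before relying on them.

    package main

    import (
        "log"

        "github.com/aquasecurity/esquery" // assumed import path
        "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build and run the "term" query from the example above; the full
        // {"query":{"term":{"user":{"value":"Kimchy"}}}} body is generated,
        // and the raw *esapi.Response comes back unchanged.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Run(es, es.Search.WithIndex("users"))
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
    }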
Package couchdb provides components to work with CouchDB 2.x with Go. Resource provides low-level wrappers around the HTTP methods used to communicate with a CouchDB server. Server contains all the functions to work with a CouchDB server, including some basic functions that facilitate the basic user management it provides. Database contains all the functions to work with a CouchDB database, such as manipulating and querying documents. ViewResults represents the results produced by design document views. When calling any of its functions, like Offset(), TotalRows(), UpdateSeq() or Rows(), it performs the view query on the server side and returns the results as a slice of Row. ViewDefinition is the definition of a view stored in a specific design document; you can define your own map-reduce functions and Sync them with the database. Document represents a document object in the database. Any struct that maps to a CouchDB document must embed it. For example (see the sketch below): Then you can call Store(db, &user) to store it into CouchDB, or Load(db, user.GetID(), &anotherUser) to get the data from the database. ViewField represents a view definition value bound to Document. tools/replicate is a command-line tool for replicating databases from one CouchDB server to another. This is mainly for backup purposes, but you can also use the -continuous option to set up automatic replication.
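A sketch reconstructing the elided example: a struct embeds couchdb.Document and is round-tripped with Store and Load, as described above. The field tags and the assumption that db is a database handle created elsewhere are illustrative, not verified against the package.

    // Any struct that maps to a CouchDB document embeds couchdb.Document.
    type User struct {
        couchdb.Document
        Name string `json:"name"`
        Age  int    `json:"age"`
    }

    user := User{Name: "Ada", Age: 36}
    if err := couchdb.Store(db, &user); err != nil { // db created elsewhere
        log.Fatal(err)
    }

    var anotherUser User
    if err := couchdb.Load(db, user.GetID(), &anotherUser); err != nil {
        log.Fatal(err)
    }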
Package sqlr is designed to reduce the effort required to work with SQL databases. It is intended for programmers who are comfortable with writing SQL, but would like assistance with the sometimes tedious process of preparing SQL queries for tables that have a large number of columns, or have a variable number of input parameters. This GoDoc summary provides an overview of how to use this package. For more detailed documentation, see https://jjeffery.github.io/sqlr. Preparing SQL queries with many placeholder arguments is tedious and error-prone. The following insert query has a dozen placeholders, and it is difficult to match up the columns with the placeholders. It is not uncommon to have tables with many more columns than this example, and the level of difficulty increases with the number of columns in the table. This package uses reflection to simplify the construction of SQL queries. Supplementary information about each database column is stored in the structure tag of the associated field. The calling program creates a schema, which describes rules for generating SQL statements. These rules include specifying the SQL dialect (eg MySQL, Postgres, SQLite) and the naming convention used to convert Go struct field names into column names (eg "GivenName" => "given_name"). The schema is usually created during program initialization. Once created, a schema is immutable and can be used concurrently from multiple goroutines. A session is created using a context, a database connection (eg *sql.DB, *sql.Tx, *sql.Conn), and a schema. A session is inexpensive to create, and is intended to last no longer than a single request (which might be an HTTP request, in the case of an HTTP server). A session is bounded by the lifetime of its context. Once a session has been created, it is possible to create simple row insert/update statements with minimal effort, as sketched below. The Exec method parses the SQL query and replaces occurrences of "{}" with the column names or placeholders that make sense for the SQL clause in which they occur. In the example above, the insert and update statements would look like: If the schema is created with a different dialect then the generated SQL will be different. For example, if the Postgres dialect was used, the insert and update queries would look more like: Inserting and updating a single row are common enough operations that the session has methods that make it very simple: Select queries can be performed using the session's Select method: The SQL queries prepared in the above example would look like the following: The examples are using a MySQL dialect. If the schema had been set up for, say, a Postgres dialect, a generated query would look more like: An important point to note is that this feature is not about writing the SQL for the programmer. Rather it is about "filling in the blanks": allowing the programmer to specify as much of the SQL query as they want without having to write the tiresome bits. When inserting rows, if a column is defined as an autoincrement column, then the generated value will be retrieved from the database server, and the corresponding field in the row structure will be updated. Autoincrement column values work for all supported databases (PostgreSQL, MySQL, Microsoft SQL Server and SQLite). Most SQL database tables have columns that are nullable, and it can be tiresome to always map to pointer types or special nullable types such as sql.NullString.
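A sketch of the schema/session/"{}" flow just described. The constructor and option names (NewSchema, WithDialect, WithNamingConvention, NewSession) and the Exec return values follow the summary above and may differ in detail from the package; treat them as assumptions.

    // Schema is usually created once, during program initialization.
    schema := sqlr.NewSchema(
        sqlr.WithDialect(sqlr.MySQL),
        sqlr.WithNamingConvention(sqlr.SnakeCase), // GivenName => given_name
    )

    // A session pairs a context, a DB connection and the schema.
    sess := sqlr.NewSession(ctx, db, schema) // db: *sql.DB, *sql.Tx or *sql.Conn

    type User struct {
        ID        int `sql:"primary key"` // tag syntax assumed for illustration
        GivenName string
    }

    // "{}" is expanded into the column lists or placeholders appropriate
    // to each SQL clause, as described above.
    row := User{ID: 1, GivenName: "Ada"}
    if _, err := sess.Exec(row, `insert into users({}) values({})`); err != nil {
        log.Fatal(err)
    }
    if _, err := sess.Exec(row, `update users set {} where {}`); err != nil {
        log.Fatal(err)
    }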
In many cases it is acceptable to map the zero value of a Go field to a database NULL in the corresponding database column. Where it is acceptable to map a zero value to a NULL database column, the Go struct field can be marked with the "null" keyword in the field's struct tag. In the above example the `manager_id` column can be null, but if all valid IDs are non-zero, it is unambiguous to map the zero value to a database NULL. Similarly, if the `phone` column is an empty string it will be stored as a NULL in the database. Care should be taken, because there are cases where a zero value and a database NULL do not represent the same thing. There are many cases, however, where this feature can be applied, and the result is simpler code that is easier to read. It is not uncommon to serialize complex objects as JSON text for storage in an SQL database. Native support for JSON is available in some database servers: in particular Postgres has excellent support for JSON. It is straightforward to use this package to serialize a structure field to JSON: In the example above the `Cmplx` field will be marshaled as JSON text when writing to the database, and unmarshaled into the struct when reading from the database. While most SQL queries accept a fixed number of parameters, an SQL query containing a `WHERE IN` clause requires additional string manipulation to match the number of placeholders in the query with the args. This package simplifies queries with a variable number of arguments. When processing an SQL query, it detects whether any of the arguments are slices (see the sketch below): In the above example, the number of placeholders ("?") in the query will be increased to match the number of values in the `ids` slice. The expansion logic can handle any mix of slice and scalar arguments. A session can create type-safe query functions. This is a very powerful feature and makes it very easy to create type-safe data access. See the Session.MakeQuery function for examples.
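A sketch of the "null"/"json" tag keywords and slice expansion described above. The tag syntax, the Details helper type and the Select return values are illustrative assumptions based on the summary, not the package's verified API.

    type Details struct{ Notes string } // hypothetical complex field

    // "null": zero values are stored as database NULL.
    // "json": the field is marshaled to/from JSON text.
    type Employee struct {
        ID        int     `sql:"primary key"`
        ManagerID int     `sql:"null"` // zero value stored as NULL
        Phone     string  `sql:"null"` // empty string stored as NULL
        Cmplx     Details `sql:"json"` // serialized as JSON text
    }

    // The single "?" is expanded to match len(ids).
    ids := []int{10, 11, 12}
    var rows []*Employee
    if _, err := sess.Select(&rows, `select {} from employees where id in (?)`, ids); err != nil {
        log.Fatal(err)
    }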
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in-progress effort to provide idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with them. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point for using the dom package is the GetWindow() function, which returns a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. Behind these interface values will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual for values of type Element to also implement HTMLElement. In all cases, type assertions can be used; see the example sketched below. Several functions in the JavaScript DOM return "live" collections of elements, that is, collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList, to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, via the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList provides Set([]string) and SetString(string) methods, which accomplish the same. Additionally, our TokenList provides methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards-incompatible changes from time to time. This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to keep changes to function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
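A sketch of the type-assertion pattern mentioned above, using identifiers named in this summary (GetWindow, Document, Element, HTMLElement, HTMLParagraphElement); the QuerySelector call and method names are assumptions drawn from the usual DOM API shape.

    // A generic Element can be asserted to a concrete element type, or to
    // the HTMLElement interface, as described above.
    el := dom.GetWindow().Document().QuerySelector(".info")
    if p, ok := el.(*dom.HTMLParagraphElement); ok {
        p.SetTextContent("hello") // concrete paragraph element
    }
    if h, ok := el.(dom.HTMLElement); ok {
        h.Focus() // any HTML element
    }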
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1 and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so they are readable by this package. To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler. Read a pickled string or unicode object Read a pickled integer Read a pickled list of numbers into a structure Read a pickled dictionary into a structure Pickle a struct You can pickle recursive objects like so Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is a map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above Python code into a structure with UnpackInto would either fail or recurse forever. The Python pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function The module and name are the class name. So if you have a class called "Foo" in the module "bar", the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user-defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything. If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise convert the args into whatever you want and return that as the value from the function, with nil for the error. To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse. The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version, with the impact. Protocol 0 This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package. Protocol 1 This opcode is used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason. Protocol 2 This opcode is used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. These opcodes allow using a registry of popular objects that are pickled by name, typically classes. It is envisioned that, through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
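A self-contained round-trip sketch of the NewPickler and UnpackInto(...).From(...) patterns described above. The import path is the commonly used one for this package; the Pickle return values are ignored here because the exact signature is not confirmed by this summary.

    package main

    import (
        "bytes"
        "log"

        "github.com/hydrogen18/stalecucumber" // assumed import path
    )

    type Point struct {
        X int64
        Y int64
    }

    func main() {
        // Pickle a struct into a buffer, then unpickle it back into a new
        // struct using UnpackInto(...).From(...).
        var buf bytes.Buffer
        stalecucumber.NewPickler(&buf).Pickle(Point{X: 1, Y: 2}) // returns ignored for brevity; check errors in real code

        var p Point
        if err := stalecucumber.UnpackInto(&p).From(stalecucumber.Unpickle(&buf)); err != nil {
            log.Fatal(err)
        }
        log.Printf("%+v", p) // {X:1 Y:2}
    }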
Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json. Supported Serialization formats are: To install: This package will carefully use 'unsafe' for performance reasons in specific places. You can build without unsafe use by passing the safe or appengine tag i.e. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions; e.g. if the current Go release is go 1.9, unsafe use is supported only from go 1.7 onward. This is because supporting unsafe requires knowledge of implementation details. For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer . The idiomatic Go support is as seen in other encoding packages in the standard library (ie json, xml, gob, etc). Rich Feature Set includes: Users can register a function to handle the encoding or decoding of their custom types. There are no restrictions on what the custom type can be. Some examples: As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like. This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree This symmetry is important to reduce chances of issues happening because the encoding and decoding sides are out of sync e.g. decoded via very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode. Consequently, if a type only defines one half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON()), then that type doesn't satisfy the check and we will continue walking down the decision tree. RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package. The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification. The Encoder and Decoder are NOT safe for concurrent use. Consequently, the usage model is basically: Sample usage model (see the sketch below): To run tests, use the following: To run the full suite of tests, use the following: You can use the tag 'safe' to run tests or build in safe mode, e.g. For benchmarks, please see http://github.com/ugorji/go-codec-bench . Struct fields matching the following are ignored during encoding and decoding Every other field in a struct will be encoded/decoded. Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
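A minimal instance of the usage model described above: one Handle (per format) shared across uses, with a fresh Encoder and Decoder created as needed. This follows the package's documented NewEncoder/NewDecoder API; swap JsonHandle for MsgpackHandle, CborHandle or BincHandle as required.

    package main

    import (
        "bytes"
        "log"

        "github.com/ugorji/go/codec"
    )

    func main() {
        // One Handle per format; safe for concurrent read, per the note above.
        var jh codec.JsonHandle

        var buf bytes.Buffer
        enc := codec.NewEncoder(&buf, &jh)
        if err := enc.Encode(map[string]int{"a": 1}); err != nil {
            log.Fatal(err)
        }

        var out map[string]int
        dec := codec.NewDecoder(&buf, &jh)
        if err := dec.Decode(&out); err != nil {
            log.Fatal(err)
        }
        log.Println(out["a"]) // 1
    }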
Package cslb provides transparent HTTP/HTTPS Client Side Load Balancing for Go programs. Cslb intercepts "net/http" Dial Requests and redirects them to a preferred set of target hosts based on the load-balancing configuration expressed in DNS SRV and TXT Resource Records (RRs). Only one trivial change is required for client applications to benefit from cslb: import this package and (if needed) enable it for non-default http.Transport instances. Cslb processing is triggered by the presence of SRV RRs. If no SRVs exist, cslb is benign, which means you can deploy your application with cslb and independently activate and deactivate cslb processing for each service at any time. No server-side changes are required at all - apart from possibly dispensing with your server-side load-balancers! Importing cslb automatically enables interception for http.DefaultTransport. In this program snippet: the Dial Request made by http.Get is intercepted and processed by cslb. If the application uses its own http.Transport then cslb processing needs to be activated by calling the cslb.Enable() function, i.e.: The cslb.Enable() function replaces http.Transport.DialContext with its own intercept function. Server-side load-balancers are no panacea. They add deployment and diagnostic complexity, cost, and throughput constraints, and they become an additional point of possible failure. Cslb can help you achieve good load-balancing and fail-over behaviour without the need for *any* server-side load-balancers. This is particularly useful in enterprise and micro-service deployments, as well as smaller application deployments where configuring and managing load-balancers is a significant resource drain. Cslb can be used to load-balance across geographically dispersed targets or where "hot stand-by" systems are purposely deployed on diverse infrastructure. When cslb intercepts an http.Transport Dial Request to port 80 or port 443, it looks up SRV RRs as prescribed by RFC2782. That is, _http._tcp.$domain and _https._tcp.$domain respectively. Cslb directs the Dial Request to the highest-preference target based on the SRV algorithm. If that Dial Request fails, it tries the next lower-preference target until a successful connection is returned, all unique targets fail, or it runs out of time. Cslb caches the SRV RRs (or their non-existence), as well as the results of Dial Requests to the SRV targets, to optimize subsequent intercepted calls and the selection of preferred targets. If no SRV RRs exist, cslb passes the Dial Request on to net.DialContext. Cslb has specific rules about when interception occurs. It normally only considers intercepting port 80 and port 443; however, if the "cslb_allports" environment variable is set, cslb intercepts non-standard HTTP ports and maps them to numeric service names. For example http://example.net:8080 gets mapped to _8080._tcp.example.net as the SRV name to resolve. While cslb runs passively by caching the results of previous Dial Requests, it can also run actively by periodically performing health checks on targets. This is useful as an administrator can control health-check behaviour to move a target "in and out of rotation" without changing DNS entries and waiting for TTLs to age out. Health checks are also likely to make the application a little more responsive, as they make it less likely that a dial attempt goes to a target that is not working. Active health checking is enabled by the presence of a TXT RR in the sub-domain "_$port._cslb" of the target. E.g. if the SRV target is "s1.example.net:80" then cslb looks for the TXT RR at "_80._cslb.s1.example.net". If that TXT RR contains a URL then it becomes the health-check URL. If no TXT RR exists, or the contents do not form a valid URL, then no active health check is performed for that target. The health-check URL does not have to be related to the target in any particular way. It could be a URL to a central monitoring system which performs complicated application-level tests and performance monitoring. Or it could be a URL on the target system itself. A health check is considered successful when a GET of the URL returns a 200 status and the content contains the uppercase text "OK" somewhere in the body (see the "cslb_hc_ok" environment variable for how this can be modified). Unless both of those conditions are met, the target is considered unavailable. Active health checks cease once a target has been idle for too long, and health-check Dial Requests are *not* intercepted by cslb. If your current service exists on a single server called "s1.example.net" and you want to spread the load across additional servers "s2.example.net" and "s3.example.net", and assuming you've added the "cslb" package to your application, then the following DNS changes activate cslb processing: Current DNS Additional DNS A number of observations about this DNS setup: Cslb maintains a cache of SRV lookups and the health status of targets. Cache entries automatically age out as a form of garbage collection. Removed cache entries stop any associated active health checks. Unfortunately the cache ageing does not have access to the DNS TTLs associated with the SRV RRs, so it makes a best guess at reasonable time-to-live values. The important point to note is that *all* values get periodically refreshed from the DNS. Nothing persists internally forever, regardless of the level of activity. This means you can be sure that any changes to your DNS will be noticed by cslb in due course. Cslb optionally runs a web server which presents internal statistics on its performance and activity. This web service has *no* access controls, so it's best to only run it on a loopback address. Setting the environment variable "cslb_listen" to a listen address activates the status server. E.g.: On initialization the cslb package examines the "cslb_options" environment variable for single-letter options which have the following meanings: An example of how this might be used from a shell: Many internal configuration values can be overridden with environment variables as shown in this table: Any values which are invalid or fall outside a reasonable range are ignored. Cslb only knows about the results of network connection attempts made by DialContext and the results of any configured health checks. If a service is accepting network connections but not responding to HTTP requests - or responding negatively - the client experiences failures but cslb will be unaware of them. The result is that cslb will continue to direct future Dial Requests to that faulty service in accordance with the SRV priorities. If your service is vulnerable to this scenario, active health checks are recommended. This could be something as simple as an on-service health check which responds based on recent "200 OK" responses in the service log file. Alternatively, an on-service monitor which closes the listen socket will also work. In general, defining a failing service is a complicated matter that only the application truly understands. For this reason health checks are used as an intermediary which does understand application-level failures and converts them into simple language which cslb groks. While every service is different, there are a few general guidelines which apply to most services when using cslb. First of all, run simple health checks if you can and configure them for use by cslb. Second, have each target configured with both IPv4 and IPv6 addresses. This affords two potentially independent network paths to the targets. Furthermore, net.Dialer attempts both IPv4 and IPv6 connections simultaneously, which maximizes responsiveness for the client. Third, consider a "canary" target as a low-preference (highest numeric value SRV priority) target. If this "canary" target is accessed by cslb clients, it tells you they are having trouble reaching their "real" targets. Being able to run a "canary" service is one of the side-benefits of cslb and SRVs. When analyzing the Status Web Page or watching the Run Time Control output, observers need to be aware of caching by the http (and possibly other) packages. For example, not every call to http.Get() results in a Dial Request, as http.Client tries to re-use connections. In a similar vein, if you change a DNS entry and don't believe cslb has noticed the change within an appropriate TTL amount of time, be aware that on some platforms the intervening recursive resolvers adjust TTLs as they see fit. For example, some home-gamer routers are known to increase short TTLs to values they believe to be more "appropriate" in an attempt to reduce their cache churn. Perhaps the biggest caveat of all is that cslb relies on being enabled for all http.Transports in use by your application. If you are importing a package (either directly or indirectly) which constructs its own http.Transports, then you'll need to modify that package to call cslb.Enable(), otherwise those http requests will not be intercepted (see the sketch below). Of course, if the package is making requests incidental to the core functionality of your application, then maybe it doesn't matter and you can leave them be. Something to be aware of.
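A sketch of enabling cslb for a non-default transport, per the description above. The import path is hypothetical, and the exact Enable signature is an assumption drawn from "replaces http.Transport.DialContext".

    package main

    import (
        "log"
        "net/http"

        "example.com/cslb" // hypothetical import path
    )

    func main() {
        // http.DefaultTransport is intercepted just by importing cslb; a
        // custom transport must be enabled explicitly.
        tr := &http.Transport{}
        cslb.Enable(tr) // assumed signature: replaces tr.DialContext with cslb's intercept

        client := &http.Client{Transport: tr}
        resp, err := client.Get("http://example.net/") // may consult _http._tcp.example.net SRV RRs
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
    }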
Package sessions provides tools to manage cookie-based web sessions. Special emphasis is placed on security by implementing OWASP recommendations, specifically the following features: In addition, the package provides the following functionality: While simple to use, the package offers a number of extensively documented configuration variables. It also does not assume specific backend technologies. That is, any session storage system may be used simply by implementing the PersistenceLayer interface (or parts of it). This package is currently not written to be run on multiple machines in a distributed fashion without a load balancer that implements sticky sessions. This may change in the future. Although some more configuration needs to happen for production readiness, the package's defaults allow you to get started very quickly. To get access to the current session, simply call Start(): By providing "true" instead of "false" to the Start() function, you can force the creation of a session, even if there previously was none. Once you have a session, you can identify a user across multiple HTTP requests. You may add values to the session, attach a user to it, cause its session ID to change, or destroy it again. For more extensive user-centered functions (for example, signing up, logging in and out, changing passwords etc.), see the subdirectory "users". Before putting your application into production, you must implement the NewSessionCookie function: You may choose a different expiry date, domain, and path, but the other fields are mandatory (given that you are using TLS, which you certainly should). You can change the name of the cookie by changing the SessionCookie variable. The default is the inconspicuous string "id". The following timeout values may be adjusted according to the requirements of your application: To further reduce the risk of session hijacking attacks, this package checks client IP addresses as well as user agent strings and destroys sessions if changes in these properties are detected. Refer to the AcceptRemoteIP and AcceptChangingUserAgent variables for more information. Sessions are stored in a local RAM cache (which is a simple map) whose size is defined by the MaxSessionCacheSize variable. If you set this variable to 0, no sessions are held locally. The SessionCacheExpiry controls when a session will be purged from the cache based on the last time it was used. The cache is write-through (except for session last access times). That is, every time a change is made to a session, that change is forwarded to the package's persistence layer to be saved. The persistence layer is a collection of functions which allow the storage and retrieval of objects from a permanent data store. For example, you may use an SQL database or a key-value store. See the documentation of PersistenceLayer for details on the functions to be implemented. If you need to implement only some of the functions, you may use ExtendablePersistenceLayer instead of creating your own type. The package default is to do nothing. That is, sessions are not persisted and therefore will be lost when purged from the local cache or when the application exits. Session objects implement gob.GobEncoder/gob.GobDecoder and json.Marshaler/json.Unmarshaler. While encoding to JSON allows you to easily inspect session attributes in your database, gob serialization is preferred as it will restore session objects precisely. (For example, the JSON package always unmarshals numbers into floats even if they were originally integers.) It is recommended that you purge expired sessions from your data store from time to time, e.g. by using a cron job, because users may abandon your website, which will leave old sessions in your store. It is recommended to call PurgeSessions() before exiting the program. This will cause session last access times to be updated. This package provides a number of utility functions which may be useful in the context of session and user management. The CUID() function generates Base-62 "compact unique identifiers" suitable for user IDs. The RandomID() function generates random Base-62 strings of any length. The ReasonablePassword() function checks the strength of a password based on the recommendations of NIST SP 800-63B.
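A sketch of session access and the mandatory NewSessionCookie override described above. The import path is assumed, the cookie field values are illustrative choices rather than package defaults, and the Set call's return is ignored because its exact signature is not confirmed by this summary.

    package main

    import (
        "net/http"
        "time"

        "github.com/rivo/sessions" // assumed import path
    )

    func init() {
        // Production cookie settings, as required above.
        sessions.NewSessionCookie = func() *http.Cookie {
            return &http.Cookie{
                Expires:  time.Now().Add(10 * 365 * 24 * time.Hour),
                MaxAge:   10 * 365 * 24 * 60 * 60,
                HttpOnly: true,
                Secure:   true, // requires TLS
            }
        }
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        session, err := sessions.Start(w, r, false) // "true" would force-create a session
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        if session != nil {
            session.Set("visited", time.Now()) // attach a value; error handling elided
        }
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }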